Using BayesShrink, BiShrink, Weighted BayesShrink, and Weighted BiShrink in NSST and SWT for Despeckling SAR Images
 Nikou Farhangi^{1} and
 Sedigheh Ghofrani^{1}
https://doi.org/10.1186/s13634-018-0244-3
© The Author(s). 2018
Received: 31 December 2016
Accepted: 4 January 2018
Published: 18 January 2018
Abstract
Synthetic aperture radar (SAR) images are inherently degraded by multiplicative speckle noise, for which thresholding-based methods in the transform domain are appropriate. Being sparse, the coefficients in the transformed domain play a key role in the performance of any thresholding method. It has been shown that the coefficients of the nonsubsampled shearlet transform (NSST) are sparser than those of the stationary wavelet transform (SWT) for both clean and noisy images. Therefore, thresholding-based methods in the NSST domain are expected to outperform those in the SWT domain. In this paper, BayesShrink, BiShrink, weighted BayesShrink, and weighted BiShrink in the NSST and SWT domains are compared in terms of subjective and objective image assessment. BayesShrink tries to find the optimum threshold for every subband; BiShrink uses coefficients named "parent" to clean up coefficients called "child"; and the weighted methods consider the coefficients' noise efficiency, which implies that subbands in the transform domain may be affected by noise differently. Two models for choosing the parent in the NSST domain are proposed. In addition, for both BayesShrink and BiShrink, considering the weighting factor (coefficients' noise efficiency) improves the performance of the corresponding methods. Experimental results show that the weighted BiShrink despeckling approach in the NSST domain gives an outstanding performance when tested with both artificially speckled images and real SAR images.
1 Introduction
Synthetic aperture radar (SAR) can be used in a wide variety of applications in the military, geology, scientific discovery, mapping, and Earth surveillance. The main advantage of SAR is its ability to operate under diverse conditions such as darkness, rain, snow, fog, and dust; however, SAR images exhibit speckle noise. It must be stressed that speckle is noise-like but is not noise; it is a real electromagnetic measurement. When the radar scans a uniform surface, the SAR image therefore shows dramatic gray-level variations, with some resolution cells appearing as dark spots and others as bright spots, depicting granular ups and downs. These spots, which are rooted in the coherent superposition of the radar echoes, are called speckle noise.
For any coherent imaging modality, such as SAR, sonar, and ultrasound, despeckling is an important process for image enhancement. Removing speckle and preserving edges are the main goals of enhancement approaches. In general, the despeckling of SAR images is carried out in either the spatial or the transformed domain [1]. Despite their low computational complexity, spatial domain filters often do not perform as well as transformed-domain algorithms [2]. The wavelet [3] is a well-known multiscale transform that can effectively handle point singularities in one-dimensional signals. For linear singularities in images, two-dimensional separable wavelets were used. However, their lack of directionality motivated researchers to propose the curvelet [4] and contourlet [5, 6] methods, which combine a multiscale transform with a directional filter bank. Their basis functions, with wedge-shaped or rectangular support regions, provide good sparse representations for higher-dimensional singularities. Recently, the shearlet transform (ST), based on an affine system [7], which can sparsely represent an image and has flexible orientation, has been proposed [8]. This new representation is based on a simple and rigorous mathematical framework that not only provides a more flexible theoretical tool for the geometric representation of multidimensional data but is also easy to implement. In addition, the shearlet exhibits highly directional sensitivity and is spatially localized [8–10]. The ST has been applied to various practical problems such as total-variation denoising [11], deconvolution [12], SAR despeckling [2, 13], and Bayesian shearlet shrinkage for SAR despeckling via sparse representation [14]. Further, Markarian and Ghofrani [15] proposed a new method based on compressive sensing for speckle reduction of SAR images. More broadly, image processing and video coding have made remarkable progress in recent years [16, 17].
Thresholding is a common method for denoising in the transform domain [18], where finding the optimum threshold value is the main problem. Among the existing methods, VISUShrink [19] obtains a universal threshold value, whereas SUREShrink [20], BayesShrink [21, 22], and bivariate shrinkage (BiShrink) [23–26] obtain the threshold values adaptively for every subband. BayesShrink is a well-known method that has been used in the nonsubsampled shearlet transform (NSST) domain [27], and BiShrink, which builds on Bayesian estimation theory, has been applied in the wavelet [20, 23, 28], contourlet [24, 29], and shearlet [25] domains.
In this paper, we compare the performances of BayesShrink, BiShrink, weighted BayesShrink, and weighted BiShrink in the NSST and stationary wavelet transform (SWT) domains in terms of subjective and objective image assessment. BayesShrink tries to find the optimum threshold for every subband; BiShrink uses coefficients named parent to clean up coefficients called child; and the weighted methods consider the coefficients' noise efficiency, which implies that the subbands in the transform domain may be affected by noise differently. Two models for choosing the parent in the NSST domain are proposed. In addition, for both BayesShrink and BiShrink, considering the weighting factor (coefficients' noise efficiency) improves the performance of the corresponding methods. The novel BiShrink despeckling method named BI-NSST is developed, and the weighted BiShrink is used in the NSST and SWT domains for the first time; the approaches are named WBI-NSST and WBI-SWT, respectively. Considering the coefficients' noise efficiency in the SWT and NSST to obtain the weighting factor and the optimum threshold value is the main contribution of this paper. Finally, the performance of the three proposed methods in the SWT and the four new approaches in the NSST is compared with five state-of-the-art methods ([2, 13, 15, 27, 30]) in terms of subjective and objective image evaluation when artificially speckled and real SAR images are denoised.
The paper is organized as follows: Section 2 presents preliminaries on the speckle noise model, the BayesShrink and BiShrink methods, and the estimation of noise and signal variances. The ST and the NSST are explained in Section 3. The proposed methods based on BiShrink in the NSST domain are presented in Section 4. Section 5 shows the experimental results, and finally, Section 6 concludes the paper.
2 Thresholdbased SAR image despeckling
Determining the optimum threshold value is the main problem in any thresholding-based method. VISUShrink [19] obtains a universal threshold value, whereas SUREShrink [20], BayesShrink [21, 22, 28], and BiShrink [23–26, 31] determine adaptive threshold values for every subband. In the following, the speckle noise model is introduced, the BayesShrink and BiShrink methods are explained, and finally, the median estimator for the noise power in the transformed domain is expressed.
2.1 Speckle noise model
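The multiplicative speckle model discussed in this section can be illustrated with a minimal numpy sketch of the standard L-look formulation Y = X · N, where N is unit-mean gamma-distributed speckle (the helper name `add_speckle` is illustrative, not from the paper):

```python
import numpy as np

def add_speckle(image, looks=1, seed=None):
    """Simulate the multiplicative model Y = X * N, where N is unit-mean
    gamma-distributed speckle with variance 1/looks (L-look intensity SAR)."""
    rng = np.random.default_rng(seed)
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
    return image * noise
```

Because the noise has unit mean, the speckled image keeps the radiometry of the clean image on average, while its per-pixel variance grows with the squared intensity, which is exactly why additive-noise shrinkage rules are usually applied after a homomorphic (logarithmic) transformation.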
2.2 BayesShrink denoising in transformed domain
Based on the classical soft shrinkage function, Eq. (8) can be rewritten as \( \widehat{X}(Y)=\mathrm{soft}\left(Y,\frac{\sqrt{2}{\sigma}_N^2}{\sigma}\right) \).
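The rewritten rule above can be sketched in numpy; the signal standard deviation σ is estimated from the noisy subband in the usual BayesShrink way, \( \sigma=\sqrt{\max({\sigma}_Y^2-{\sigma}_N^2,0)} \) (a minimal sketch; the function names are illustrative):

```python
import numpy as np

def soft(y, t):
    """Classical soft shrinkage: shrink magnitudes toward zero by t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def bayes_shrink(coeffs, sigma_n):
    """Apply X_hat = soft(Y, sqrt(2) * sigma_n**2 / sigma), with the signal
    std sigma estimated from the noisy subband as max(E[Y^2]-sigma_n^2, 0)."""
    sigma = np.sqrt(max(np.mean(coeffs ** 2) - sigma_n ** 2, 1e-12))
    return soft(coeffs, np.sqrt(2.0) * sigma_n ** 2 / sigma)
```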
2.3 BiShrink denoising in transformed domain
In 2002, a novel denoising method named BiShrink was proposed [23, 35]. BiShrink is a simple nonlinear shrinkage function in the transformed domain that exploits the dependency between subbands known as child and parent. To obtain the threshold value for denoising a child subband, BiShrink also uses the coefficients of the corresponding parent subband.
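Since the shrinkage rule itself (Eq. (18)) is referenced but not reproduced here, the following is a minimal sketch of the standard bivariate shrinkage formula of [23], in which the joint child-parent magnitude is thresholded so that a strong parent protects its child:

```python
import numpy as np

def bishrink(child, parent, sigma_n, sigma):
    """Bivariate shrinkage: the child y1 is shrunk by the threshold
    sqrt(3) * sigma_n**2 / sigma applied to the joint magnitude
    sqrt(y1**2 + y2**2); isolated small children are set to zero."""
    mag = np.sqrt(child ** 2 + parent ** 2)
    gain = np.maximum(mag - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
    return gain / np.maximum(mag, 1e-12) * child
```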
2.4 Estimating noise and signal variance
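Section 2 states that the median is used for noise power estimation in the transformed domain; a minimal numpy sketch of the standard robust (MAD) noise estimator and the corresponding signal standard deviation estimate follows (function names are illustrative):

```python
import numpy as np

def noise_sigma(finest_diagonal):
    """Robust MAD estimate of the noise standard deviation from the
    finest (diagonal) subband: sigma_N = median(|d|) / 0.6745."""
    return np.median(np.abs(finest_diagonal)) / 0.6745

def signal_sigma(neighborhood, sigma_n):
    """Signal std from a local window: sigma^2 = max(E[Y^2] - sigma_N^2, 0),
    since the noisy variance is the sum of signal and noise variances."""
    return np.sqrt(max(np.mean(np.asarray(neighborhood) ** 2) - sigma_n ** 2, 0.0))
```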
3 Nonsubsampled shearlet transform (NSST)
A novel multiscale directional representation system called shearlets was proposed in 2005 [7]. Two properties, multiresolution and sparsity, render the ST attractive in science and engineering [10, 11, 37]. In the following, the continuous and discrete STs are explained briefly.
In general, subsampling operations make a transform shift-variant. Therefore, by omitting the up- and down-sampling blocks, the SWT [30] was proposed in 2003 and the NSST [10] in 2008. In the nonsubsampled version of a transform, the coefficients are not decimated between decomposition levels, so all subbands have the same size as the original input image. Consequently, the SWT and NSST require more computation and storage than the conventional WT and ST.
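The effect of dropping decimation can be illustrated with a one-level undecimated (à trous) Haar analysis, a simplified stand-in for the actual SWT/NSST filter banks of [10, 30]: without downsampling, every subband retains the input size.

```python
import numpy as np

def undecimated_haar(x):
    """One undecimated Haar analysis level (periodic boundaries): no
    downsampling, so all four subbands keep the size of the input image."""
    lo = (x + np.roll(x, -1, axis=0)) / 2.0   # lowpass along rows
    hi = (x - np.roll(x, -1, axis=0)) / 2.0   # highpass along rows
    ll = (lo + np.roll(lo, -1, axis=1)) / 2.0
    lh = (lo - np.roll(lo, -1, axis=1)) / 2.0
    hl = (hi + np.roll(hi, -1, axis=1)) / 2.0
    hh = (hi - np.roll(hi, -1, axis=1)) / 2.0
    return ll, lh, hl, hh
```

Deeper levels would upsample the filters by 2 per level instead of decimating the signal, which is where the extra computation and storage of the nonsubsampled transforms come from.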
The Sd parameter for the noise-free image and three different noise powers

| Image | Transform | Noise-free | \( {\sigma}_N^2=0.05 \) | \( {\sigma}_N^2=0.1 \) | \( {\sigma}_N^2=0.15 \) |
|---|---|---|---|---|---|
| Barbara | SWT | 0.1864 | 0.1936 | 0.1969 | 0.1986 |
| Barbara | NSST | 0.1563 | 0.1760 | 0.1812 | 0.1843 |
| Lena | SWT | 0.1731 | 0.1913 | 0.1950 | 0.1969 |
| Lena | NSST | 0.1366 | 0.1759 | 0.1813 | 0.1841 |
4 Proposed methods
In the first part of this section, the image assessment parameters to evaluate denoising methods are explained, and the mutual information (MI) to measure the statistical dependency between a child and its corresponding parent coefficients is expressed. Subsequently, the models for BiShrink in the transformed domain are proposed and the weighted BiShrink in the NSST and SWT domains are applied for the first time.
Figure 1 shows the block diagram of the BiShrink despeckling method in the NSST and SWT domains. In general, the BiShrink denoising algorithm consists of a three-step process: estimating \( {\widehat{\sigma}}_N \) for every subband according to Eq. (20), estimating \( {\widehat{\sigma}}_Y^2 \) and \( {\widehat{\sigma}}^2 \) based on Eqs. (21) and (22), and obtaining the noise-free coefficients using Eq. (18).
4.1 Image assessment parameters
Among the image assessment parameters used to evaluate the performance of a despeckling algorithm, in this paper we have chosen the peak signal-to-noise ratio (PSNR) [39] and structural similarity (SSIM) [40] as full-reference metrics, and the equivalent number of looks (ENL) [13], mean square difference (MSD) [41], and edge save index (ESI) [13] as no-reference metrics.
The SSIM index measures the similarity between the original and the despeckled image through a local statistical analysis (i.e., the mean, variance, and covariance of the unfiltered and despeckled pixel values within a sliding window). SSIM ∈ (−1, 1); poor similarity between the original and the despeckled image corresponds to SSIM → −1, whereas good similarity is indicated by SSIM → 1.
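Two of these metrics are simple enough to sketch directly: the full-reference PSNR and the no-reference ENL, which is computed over a visually homogeneous region (the MSD and ESI formulas of [13, 41] are not reproduced here):

```python
import numpy as np

def psnr(reference, despeckled, peak=255.0):
    """Full-reference PSNR in dB for 8-bit images."""
    mse = np.mean((reference.astype(float) - despeckled.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def enl(homogeneous_region):
    """No-reference ENL: mean^2 / variance over a visually flat region;
    higher values indicate stronger smoothing of the speckle."""
    r = np.asarray(homogeneous_region, dtype=float)
    return r.mean() ** 2 / r.var()
```

For L-look gamma speckle on a flat region, ENL is approximately L before filtering, so the increase of ENL after despeckling directly quantifies the amount of speckle suppression.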
4.2 Parent and child coefficient models
Although a fair amount of research on image denoising in the transformed domain has been carried out [31, 35], thresholding is still attractive due to its simplicity [19, 20]. Moreover, thresholding based on a bivariate MAP estimator that exploits the dependency between coefficients [25, 45] gives appropriate results.

- Model 1: considering X_{2}(OPP) as the parent; the method is named BI-NSST(1).
- Model 2: considering X_{2}(NC) as the parent; the method is named BI-NSST(2).
Although model 1 was proposed for the ST domain [25], we used it in the NSST domain as well. In addition, proposing model 2 in the NSST domain according to the MI shown in Fig. 4 is the contribution herein, where model 2 is expected to outperform model 1 in the NSST domain.
4.3 Weighted shrinkage method
As the nonsubsampled version is used, all subbands have the same size as the input image (i.e., m × n). The MSE of all subbands for the eight mentioned test images in the SWT and NSST domains is obtained, and the average values are shown in Fig. 5. As expected, all subbands of the SWT are affected by noise approximately equally (Fig. 5a2), in contrast to the NSST, where some subbands are more robust against noise than others (Fig. 5b2).
Using the weighting factor α_{ℓ, k} yields the optimum threshold value in Eq. (33), which is then applied to the coefficients via the soft thresholding in Eq. (18). The corresponding methods are named WBI-SWT, WBI-NSST(1), and WBI-NSST(2).
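Since Eq. (33) is not reproduced here, the following sketch only illustrates the idea of per-subband noise efficiency: a hypothetical weight α_{ℓ,k} derived from each subband's noise-only MSE scales the base threshold, so subbands that absorb more noise energy are thresholded more aggressively. Both functions and the weighting scheme are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def subband_weights(noise_only_mse):
    """Hypothetical weighting factor alpha_{l,k}: each subband's noise-only
    MSE normalized by the mean over all subbands, so subbands that absorb
    more noise energy receive a larger weight."""
    m = np.asarray(noise_only_mse, dtype=float)
    return m / m.mean()

def weighted_soft(coeffs, base_threshold, alpha):
    """Scale the subband threshold by its weight, then soft-threshold."""
    t = alpha * base_threshold
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```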
5 Experimental results and discussion
In this paper, we used the images Barbara, Lena, House, Boat, Goldhill, Fingerprint, Cameraman, and Peppers, of size 512 × 512 pixels with 256 gray levels, as the test images, and the images Farmland, Peninsula, and Shipping Terminal [46], of size 500 × 500 pixels, together with Aircraft [47], of size 512 × 512 pixels, as the real SAR images. The three images Farmland, Peninsula, and Shipping Terminal [46] are part of the SAR data collected by RADARSAT-1 in the Fine Beam 2 mode on June 16, 2002. Most of the illuminated scenes were in Delta, British Columbia, Canada. The radar was operating in the C-band with HH polarization. These three parts have good image characteristics, such as granular texture as well as many high- and low-frequency regions. The Mini SAR image "Aircraft" [47] was collected over the Kirtland AFB region on August 27, 2007 in the Ka-band and Ku-band.
An input signal is decomposed into three levels using the SWT and NSST. According to the numbered subbands shown in Fig. 5a1, b1, the SWT has three subbands in each level, whereas the NSST has 16, eight, and four subbands for the 1st, 2nd, and 3rd decomposition levels, respectively. The block diagram in Fig. 1 indicates that the homomorphic framework (applying the logarithm at the beginning and the exponential at the end) is used for both the test images and the real SAR images. The method for obtaining the threshold value for the shrinkage methods and their weighted versions is explained in detail in Section 4.3.
Here, in the SWT domain, we evaluate the performance of BayesShrink (B-SWT) [30], weighted BayesShrink (WB-SWT), BiShrink (BI-SWT), and weighted BiShrink (WBI-SWT); in the NSST domain, we present the results achieved with BayesShrink (B-NSST) [2], weighted BayesShrink (WB-NSST) [27], BiShrink based on models 1 and 2 (BI-NSST(1), BI-NSST(2)), and weighted BiShrink based on models 1 and 2 (WBI-NSST(1), WBI-NSST(2)), in terms of subjective and objective criteria. Furthermore, two methods in the NSST domain, GΓD-NSST and NIG-NSST [13], and a high-order total variation method based on compressive sensing, called High-TV [15], are also applied to compare the proposed methods against the state of the art under the same circumstances. For this purpose, we implemented all of these methods ([2, 13, 15, 27, 30]).
Two objective assessment parameters for Barbara, comparing methods in the SWT and NSST domains

| Domain | Method | PSNR (dB) ↑, \( {\sigma}_N^2=0.05 \) | SSIM ↑ | PSNR (dB) ↑, \( {\sigma}_N^2=0.1 \) | SSIM ↑ | PSNR (dB) ↑, \( {\sigma}_N^2=0.15 \) | SSIM ↑ |
|---|---|---|---|---|---|---|---|
| SWT | B-SWT [30] | 26.2485 | 0.999962 | 24.5411 | 0.999883 | 23.5183 | 0.999734 |
| SWT | WB-SWT | 26.3588 | 0.999963 | 24.5631 | 0.999886 | 23.6646 | 0.999738 |
| SWT | BI-SWT | 27.6081 | 0.999963 | 25.4481 | 0.999894 | 24.0742 | 0.999740 |
| SWT | WBI-SWT | 27.6893 | 0.999963 | 25.4621 | 0.999894 | 24.1642 | 0.999741 |
| NSST | B-NSST [2] | 28.2209 | 0.999969 | 26.1606 | 0.999894 | 24.7003 | 0.999747 |
| NSST | WB-NSST [27] | 28.2413 | 0.999969 | 26.1921 | 0.999894 | 24.7371 | 0.999746 |
| NSST | BI-NSST(1) | 28.2823 | 0.999969 | 26.2552 | 0.999894 | 24.8769 | 0.999742 |
| NSST | WBI-NSST(1) | 28.2971 | 0.999969 | 26.2861 | 0.999894 | 24.9112 | 0.999744 |
| NSST | BI-NSST(2) | 28.6433 | 0.999974 | 26.5448 | 0.999904 | 25.0982 | 0.999764 |
| NSST | WBI-NSST(2) | 28.6819 | 0.999974 | 26.5694 | 0.999905 | 25.1537 | 0.999763 |
PSNRs obtained for denoising the eight test images in the NSST domain when \( {\sigma}_n^2=0.1 \). The algorithm was run 30 times for every image, and the average PSNR (dB) is reported

| Method | Barbara | Lena | House | Boat | Goldhill | Fingerprint | Cameraman | Peppers |
|---|---|---|---|---|---|---|---|---|
| B-NSST | 26.16 | 28.27 | 28.41 | 25.89 | 26.13 | 20.66 | 28.32 | 27.75 |
| WB-NSST | 26.19 | 28.28 | 28.41 | 25.86 | 26.10 | 20.68 | 28.32 | 27.76 |
| BI-NSST(1) | 26.25 | 28.42 | 28.52 | 26.06 | 26.28 | 20.81 | 28.39 | 27.77 |
| WBI-NSST(1) | 26.28 | 28.42 | 28.52 | 26.05 | 26.25 | 20.84 | 28.40 | 27.78 |
| BI-NSST(2) | 26.54 | 28.47 | 28.46 | 26.33 | 26.74 | 22.06 | 28.01 | 27.50 |
| WBI-NSST(2) | 26.56 | 28.50 | 28.52 | 26.34 | 26.72 | 22.08 | 28.03 | 27.52 |
Four no-reference parameters for the Peninsula SAR image, comparing methods in the SWT and NSST domains

| Domain | Method | ENL ↑ | MSD ↑ | ESI^{v} ↑ | ESI^{h} ↑ |
|---|---|---|---|---|---|
| — | NOISY | 2.9993 | 0.0000 | 1.0000 | 1.0000 |
| SWT | B-SWT [30] | 21.2362 | 0.0005 | 0.3020 | 0.0829 |
| SWT | WB-SWT | 25.4732 | 0.0005 | 0.2524 | 0.1022 |
| SWT | BI-SWT | 32.8736 | 0.0005 | 0.3580 | 0.2509 |
| SWT | WBI-SWT | 60.3220 | 0.0006 | 0.2920 | 0.3107 |
| NSST | B-NSST [2] | 58.4842 | 0.0006 | 0.8174 | 0.6288 |
| NSST | WB-NSST [27] | 97.9584 | 0.0006 | 0.8125 | 0.6401 |
| NSST | BI-NSST(1) | 106.9630 | 0.0006 | 0.8164 | 0.6299 |
| NSST | WBI-NSST(1) | 119.1139 | 0.0006 | 0.8071 | 0.6407 |
| NSST | BI-NSST(2) | 112.0674 | 0.0006 | 0.8096 | 0.6558 |
| NSST | WBI-NSST(2) | 152.9135 | 0.0006 | 0.7936 | 0.6706 |
Four no-reference parameters for the Farmland SAR image

| Method | ENL ↑ | MSD ↑ | ESI^{v} ↑ | ESI^{h} ↑ |
|---|---|---|---|---|
| NOISY | 3.0833 | 0.000000 | 1.0000 | 1.0000 |
| High-TV [15] | 43.9698 | 0.001603 | 0.0200 | 0.0476 |
| GΓD-NSST [13] | 70.3518 | 0.001639 | 0.0994 | 0.2848 |
| NIG-NSST [13] | 88.3271 | 0.001693 | 0.0626 | 0.1908 |
| BI-NSST(1) | 81.4881 | 0.001693 | 0.0705 | 0.1877 |
| WBI-NSST(1) | 91.3149 | 0.001638 | 0.1085 | 0.3195 |
| BI-NSST(2) | 84.9580 | 0.001703 | 0.0651 | 0.1632 |
| WBI-NSST(2) | 110.4275 | 0.001639 | 0.1093 | 0.3162 |
6 Conclusions
In this paper, three methods in the SWT domain and four approaches in the NSST domain are developed based on BayesShrink, BiShrink, weighted BayesShrink, and weighted BiShrink. For the BiShrink implementation in the NSST domain, two models for choosing the child-parent pair are proposed with regard to the MI parameter. Although the model recommended by the MI value performs best on synthesized images with highly detailed content, it is not appropriate for real SAR images or synthesized images with many smooth regions. Since this study showed that thresholding-based methods in the NSST domain outperform those in the SWT domain, finding new parameters for choosing a suitable child-parent pair in the NSST is future research work.
Declarations
Acknowledgements
The authors would like to thank S. Jafari [13] for providing the MATLAB codes to implement the GΓD-NSST and NIG-NSST algorithms, and H. Markarian [15] for providing the High-TV codes. Further, we thank Sandia National Laboratories [47] for providing the real SAR image.
Funding
Not applicable
Availability of data and materials
The three images (Farmland, Peninsula, and Shipping Terminal) [46] are part of the SAR data collected by RADARSAT-1 in the Fine Beam 2 mode on June 16, 2002. Most of the illuminated scenes were in Delta, British Columbia, Canada. The radar was operating in the C-band with HH polarization. The Mini SAR image Aircraft, provided by Sandia National Laboratories [47], was collected over the Kirtland AFB region on August 27, 2007 in the Ka-band and Ku-band.
Authors’ contributions
We confirm that this work is original and has not been published nor is it currently under consideration for publication elsewhere. Both authors read and approved the final manuscript.
Authors’ information
Nikou Farhangi was born in Macoo, Iran, in 1987. She received the B.Sc. degree in telecommunication engineering from Islamic Azad University, Urmia Branch, in 2010 and the M.Sc. degree in communication systems from Islamic Azad University, South Tehran Branch, Iran, in 2017. Her research interests include digital signal and image processing.
Sedigheh Ghofrani was born in 1968 in Ghochan. She received the B.S. degree in electronic engineering from Tehran University, Iran, in 1991, the M.S. degree in communication from Islamic Azad University, South Tehran Branch, Iran, in 1997, and the Ph.D. degree in electronics from Iran University of Science and Technology, Tehran, Iran, in 2004. She was an Assistant Professor in the Department of Electronic and Electrical Engineering, Islamic Azad University, South Tehran Branch, from 2004 to 2011 and has been an Associate Professor since 2012. In 2003, she spent 8 months at the School of Electronic and Electrical Engineering, University of Leeds, UK, supported by the British Council. In 2012, she spent 8 months at the Center for Advanced Communications (CAC) at Villanova University, PA, USA, as a Visiting Research Professor. Her research areas include image processing and signal processing.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Authors’ Affiliations
References
1. H Chen, Y Zhang, H Wang, C Ding, Stationary-wavelet-based despeckling of SAR images using two-sided generalized gamma models. IEEE Geosci. Remote Sens. Lett. 9(6), 1061–1065 (2012)
2. B Hou, X Zhang, X Bu, H Feng, SAR image despeckling based on nonsubsampled shearlet transform. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 5(3), 809–823 (2012)
3. DL Donoho, De-noising by soft-thresholding. IEEE Trans. Inf. Theory 41(3), 613–627 (1995)
4. EJ Candes, DL Donoho, Curvelets: A Surprisingly Effective Nonadaptive Representation for Objects with Edges (DTIC Document, 1999)
5. X Zhang, X Jing, Image denoising in contourlet domain based on a normal inverse Gaussian prior. Digit. Signal Process. 20(5), 1439–1446 (2010)
6. MN Do, Directional Multiresolution Image Representations (PhD thesis, 2002)
7. D Labate, WQ Lim, G Kutyniok, G Weiss, Sparse multidimensional representation using shearlets, in Proc. SPIE, pp. 59140U-1–59140U-9 (2005)
8. K Guo, D Labate, Optimally sparse multidimensional representation using shearlets. SIAM J. Math. Anal. 39(1), 298–318 (2007)
9. WQ Lim, The discrete shearlet transform: a new directional transform and compactly supported shearlet frames. IEEE Trans. Image Process. 19(5), 1166–1180 (2010)
10. G Easley, D Labate, WQ Lim, Sparse directional image representations using the discrete shearlet transform. Appl. Comput. Harmon. Anal. 25(1), 25–46 (2008)
11. GR Easley, D Labate, F Colonna, Shearlet-based total variation diffusion for denoising. IEEE Trans. Image Process. 18(2), 260–268 (2009)
12. VM Patel, GR Easley, DM Healy Jr, Shearlet-based deconvolution. IEEE Trans. Image Process. 18(12), 2673–2685 (2009)
13. S Jafari, S Ghofrani, Using two coefficients modeling of nonsubsampled shearlet transform for despeckling. J. Appl. Remote Sens. 10(1), 015002 (2016)
14. SQ Liu, SH Hu, Y Xiao, YL An, Bayesian shearlet shrinkage for SAR image denoising via sparse representation. Multidim. Syst. Sign. Process. 25(4), 683–701 (2014)
15. H Markarian, S Ghofrani, High-TV based CS framework using MAP estimator for SAR image enhancement. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 10(9), 4059–4073 (2017)
16. C Yan, Y Zhang, J Xu, F Dai, J Zhang, Q Dai, F Wu, Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Trans. Circuits Syst. Video Technol. 24, 2077–2089 (2014)
17. C Yan, Y Zhang, J Xu, F Dai, L Li, Q Dai, F Wu, A highly parallel framework for HEVC coding unit partitioning tree decision on many-core processors. IEEE Signal Process. Lett. 21, 573–576 (2014)
18. MC Motwani, MC Gadiya, RC Motwani, FC Harris, Survey of image denoising techniques, in Proceedings of GSPX, pp. 27–30 (2004)
19. DL Donoho, JM Johnstone, Ideal spatial adaptation by wavelet shrinkage. Biometrika 81(3), 425–455 (1994)
20. DL Donoho, IM Johnstone, Adapting to unknown smoothness via wavelet shrinkage. J. Am. Stat. Assoc. 90(432), 1200–1224 (1995)
21. HA Chipman, ED Kolaczyk, RE McCulloch, Adaptive Bayesian wavelet shrinkage. J. Am. Stat. Assoc. 92, 1413–1421 (1997)
22. Q Liu, L Ni, Image denoising using Bayesian shrink threshold based on weighted adaptive directional lifting wavelet transform, in Proc. SPIE Int. Conf. Graphic and Image Processing, pp. 828570-1–828570-6 (2011)
23. L Şendur, IW Selesnick, Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency. IEEE Trans. Signal Process. 50(11), 2744–2756 (2002)
24. Z Dexiang, W Xiaopei, G Qingwei, G Xiaojing, SAR image despeckling via bivariate shrinkage based on contourlet transform. IEEE Int. Symp. Comput. Intell. Design 2, 12–15 (2008)
25. Q Guo, S Yu, X Chen, C Liu, W Wei, Shearlet-based image denoising using bivariate shrinkage with intra-band and opposite orientation dependencies. IEEE Int. Joint Conf. Comput. Sci. Optimization 1, 863–866 (2009)
26. S Chitchian, M Fiddy, NM Fried, Denoising during optical coherence tomography of the prostate nerves via bivariate shrinkage using dual-tree complex wavelet transform, in Proc. SPIE BiOS, Biomedical Optics, pp. 716112-1–716112-4 (2009)
27. S Jafari, S Ghofrani, M Sheikhan, Comparing undecimated wavelet, nonsubsampled contourlet and shearlet for SAR images despeckling. Majlesi J. Electr. Eng. 9(3) (2015)
28. X Xu, Y Zhao, W Zhou, Y Peng, SAR image denoising based on alpha-stable distribution and Bayesian wavelet shrinkage, in Proc. SPIE Sixth Int. Symp. Multispectral Image Processing and Pattern Recognition, pp. 74951U-1–74951U-8 (2009)
29. W Hongzhi, H Cai, Locally adaptive bivariate shrinkage algorithm for image denoising based on nonsubsampled contourlet transform. IEEE Int. Conf. Comput. Mechatronics Control Electron. Eng. 10(6), 33–36 (2010)
30. X Wang, RSH Istepanian, YH Song, Microarray image enhancement by denoising using stationary wavelet transform. IEEE Trans. NanoBioscience 2(4), 184–189 (2003)
31. D Min, Z Jiuwen, M Yide, Image denoising via bivariate shrinkage function based on a new structure of dual contourlet transform. Signal Process. 109, 25–37 (2015)
32. F Lenzen, Statistical Regularization and Denoising, Doctoral dissertation in Mathematics, Chap. 1 (2006)
33. A Hyvärinen, Sparse code shrinkage: denoising of non-Gaussian data by maximum likelihood estimation. Neural Comput. 11(7), 1739–1768 (1999)
34. DX Zhang, QW Gao, XP Wu, Bayesian based speckle suppression for SAR image using contourlet transform. J. Electron. Sci. Technol. China 6(1), 79–82 (2008)
35. L Şendur, IW Selesnick, Bivariate shrinkage with local variance estimation. IEEE Signal Process. Lett. 9(12), 438–441 (2002)
36. S Xing, Q Xu, D Ma, Speckle denoising based on bivariate shrinkage functions and dual-tree complex wavelet transform. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 38, 1–57 (2008)
37. S Dahlke, G Kutyniok, P Maass, C Sagiv, HG Stark, G Teschke, The uncertainty principle associated with the continuous shearlet transform. Int. J. Wavelets Multiresolut. Inf. Process. 6(2), 157–181 (2008)
38. G Andria, F Attivissimo, AML Lanzolla, M Savino, A suitable threshold for speckle reduction in ultrasound images. IEEE Trans. Instrum. Meas. 62(8) (2013)
39. Z Wang, AC Bovik, HR Sheikh, EP Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)
40. D Brunet, ER Vrscay, Z Wang, On the mathematical properties of the structural similarity index. IEEE Trans. Image Process. 21, 1488–1499 (2012)
41. J Zhang, TM Le, S Ong, TQ Nguyen, No-reference image quality assessment using structural activity. Signal Process. 91(11), 2575–2588 (2011)
42. M Kiani, S Ghofrani, Two new methods based on contourlet transform for despeckling synthetic aperture radar images. J. Appl. Remote Sens. 8(1), 083604 (2014)
43. XY Wang, YC Liu, HY Yang, Image denoising in extended shearlet domain using hidden Markov tree models. Digit. Signal Process. 30, 101–113 (2014)
44. TM Cover, JA Thomas, Elements of Information Theory (John Wiley & Sons, 2012)
45. GS Shin, MG Kang, Wavelet-based denoising considering interscale and intrascale dependences. Opt. Eng. 44, 067002–067009 (2005)
46. IG Cumming, FH Wong, Digital Processing of Synthetic Aperture Radar Data (Artech House, 2005)
47. AW Doerry, Automatic Compensation of Antenna Beam Roll-Off in SAR Images (Sandia National Laboratories, United States Department of Energy, 2006)
48. L Gomez, ME Buemi, JC Jacobo-Berlles, ME Mejail, A new image quality index for objectively evaluating despeckling filtering in SAR images. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 9, 1297–1307 (2016)