Using BayesShrink, BiShrink, Weighted BayesShrink, and Weighted BiShrink in NSST and SWT for Despeckling SAR Images
EURASIP Journal on Image and Video Processing volume 2018, Article number: 4 (2018)
Abstract
Synthetic aperture radar (SAR) images are inherently degraded by multiplicative speckle noise, for which thresholding-based methods in the transform domain are appropriate. Being sparse, the coefficients in the transformed domain play a key role in the performance of any thresholding method. It has been shown that the coefficients of the nonsubsampled shearlet transform (NSST) are sparser than those of the stationary wavelet transform (SWT) for both clean and noisy images. Therefore, it is expected that thresholding-based methods in the NSST domain outperform those in the SWT domain. In this paper, BayesShrink, BiShrink, weighted BayesShrink, and weighted BiShrink in the NSST and SWT domains are compared in terms of subjective and objective image assessment. BayesShrink tries to find the optimum threshold for every subband; BiShrink uses coefficients named "parent" to clean up coefficients called "child"; and the weighted methods consider the coefficients' noise efficiency, which implies that subbands in the transform domain may be affected by noise differently. Two models for choosing the parent in the NSST domain are proposed. In addition, for both BayesShrink and BiShrink, considering the weighting factor (coefficients' noise efficiency) improves the performance of the corresponding methods. Experimental results show that the weighted BiShrink despeckling approach in the NSST domain gives an outstanding performance when tested with both artificially speckled images and real SAR images.
Introduction
Synthetic aperture radar (SAR) can be used in a wide variety of applications in the military, geology, scientific discovery, mapping, and surveillance of Earth. The main advantage of SAR is its ability to operate under diverse weather conditions such as darkness, rain, snow, fog, and dust; however, SAR images exhibit speckle noise. It must be stressed that speckle is noise-like, but it is not noise; it is a real electromagnetic measurement. Therefore, when the radar scans a uniform surface, the SAR image emerges with dramatic changes in gray level, with some resolution cells shown as dark spots and others as bright spots, depicting granular ups and downs. These spots, rooted in the coherent superposition of the radar echoes, are called speckle noise.
For any coherent imaging system like SAR, such as sonar and ultrasound, despeckling is an important process for image enhancement. Removing speckle and preserving edges are the main goals of enhancement approaches. In general, the despeckling of SAR images is carried out in either the spatial or the transformed domain [1]. Despite their low computational complexity, the performance of spatial-domain filters is often not as good as that of transformed-domain algorithms [2]. The wavelet [3] is a well-known multiscale transform that can effectively capture point singularities in one-dimensional signals. For linear singularities in images, two-dimensional separable wavelets were used. However, the lack of directionality motivated researchers to propose the curvelet [4] and contourlet [5, 6] methods, which use a multiscale transform followed by a directional filter bank. Their basis functions, with wedge-shaped or rectangular support regions, provide good sparse representations for high-dimensional singularities. Recently, the shearlet transform (ST), which is based on an affine system [7], can sparsely represent an image, and has flexible orientation, has been proposed [8]. This new representation is based on a simple and rigorous mathematical framework that not only provides a more flexible theoretical tool for the geometric representation of multidimensional data but is also easy to implement. In addition, the shearlet exhibits highly directional sensitivity and is spatially localized [8,9,10]. ST has been applied to various practical problems such as total variation denoising [11], deconvolution [12], SAR despeckling [2, 13], and Bayesian shearlet shrinkage for SAR despeckling via sparse representation [14]. Further, Markarian and Ghofrani [15] proposed a new method based on compressive sensing for speckle reduction of SAR images. More broadly, image processing and video coding have made remarkable progress in recent years [16, 17].
Thresholding is a common method for denoising in the transform domain [18], where finding the optimum threshold value is the main problem. Among the existing methods, VISUShrink [19] obtains a universal threshold value, whereas SUREShrink [20], BayesShrink [21, 22], and bivariate shrinkage (BiShrink) [23,24,25,26] obtain the threshold values adaptively for every subband. Among these approaches, BayesShrink is a well-known method used in the nonsubsampled shearlet transform (NSST) domain [27], and BiShrink functions based on Bayesian estimation theory have been applied in the wavelet [20, 23, 28], contourlet [24, 29], and shearlet [25] domains as well.
In this paper, we compare the performances of BayesShrink, BiShrink, weighted BayesShrink, and weighted BiShrink in the NSST and stationary wavelet transform (SWT) domains in terms of subjective and objective image assessment. BayesShrink tries to find the optimum threshold for every subband; BiShrink uses coefficients named parent to clean up coefficients called child; and the weighted methods consider the coefficients' noise efficiency, which implies that the subbands in the transform domain may be affected by noise differently. Two models for choosing the parent in the NSST domain are proposed. In addition, for both BayesShrink and BiShrink, considering the weighting factor (coefficients' noise efficiency) improves the performance of the corresponding methods. A novel BiShrink despeckling method named BINSST is developed, and weighted BiShrink is used in the NSST and SWT domains for the first time, where the approaches are named WBINSST and WBISWT, respectively. Considering the coefficients' noise efficiency in SWT and NSST to obtain the weighting factor and the optimum threshold value is the main contribution of this paper. The performance of the three proposed methods in SWT and the four new approaches in NSST is compared with five state-of-the-art methods ([2, 13, 15, 27, 30]) in terms of subjective and objective image evaluation when artificially speckled and real SAR images are denoised.
The paper is organized as follows: Section 2 presents the preliminaries of the speckle noise model, the BayesShrink and BiShrink methods, and the estimation of the noise and signal variances. ST and the subsequent NSST are explained in Section 3. The proposed methods based on BiShrink in the NSST domain are explained in Section 4. Section 5 shows the experimental results, and finally, Section 6 concludes the paper.
Threshold-based SAR image despeckling
Determining the optimum threshold value is the main problem in any thresholding-based method. VISUShrink [19] obtains a universal threshold value, whereas SUREShrink [20], BayesShrink [21, 22, 28], and BiShrink [23,24,25,26, 31] determine adaptive threshold values for every subband. In the following, the speckle noise model is introduced, the BayesShrink and BiShrink methods are explained, and finally, the median estimator for noise power estimation in the transformed domain is presented.
Speckle noise model
In general, for any coherent imaging systems such as SAR, multiplicative speckle noise degrades the image. The speckle noise is modeled as,
where I_{ x } and I_{ y } refer to the noise-free and the observed signal, respectively, and N_{ s } is the speckle noise in the spatial domain. Writing Eq. (1) as I_{ y } = I_{ x }(1 + N_{ s }), the multiplicative noise is converted into additive noise by using the homomorphic framework, i.e., applying the logarithm transform before processing and the exponential transform at the end (see Fig. 1),
where y = log(I_{ y }), x = log(I_{ x }), and n = log(1 + N_{ s }) are the noisy signal, the noise-free signal, and the additive noise, respectively.
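As a minimal sketch, the homomorphic framework of Fig. 1 can be written as follows (NumPy; the small offset guarding against log(0) on dark pixels is our own numerical assumption, not part of the paper):

```python
import numpy as np

EPS = 1e-6  # guard against log(0) on dark pixels (our own assumption)

def to_additive(I_y):
    """Forward homomorphic step: I_y = I_x(1 + N_s)  ->  y = x + n."""
    return np.log(I_y + EPS)

def from_additive(y_hat):
    """Inverse homomorphic step, applied after despeckling in the transform domain."""
    return np.exp(y_hat) - EPS
```

A despeckling pipeline then reads `from_additive(denoise(to_additive(I_y)))`, with `denoise` standing for any of the transform-domain shrinkage methods discussed below.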
BayesShrink denoising in transformed domain
As stated above, the multiplicative noise is converted into additive noise by taking the logarithm. Applying a linear transform to Eq. (2), we have,
where k refers to the decomposition subband, and Y_{ k }, X_{ k }, and N_{ k } are the noisy, clean, and noise coefficients, respectively. The goal of the Bayesian denoising method [21] is to estimate \( {\widehat{X}}_k \) from the observed data Y_{ k }. In order to simplify the notation, the superscript k, which indicates the subband, is dropped (i.e., Y = X + N). For Eq. (3), the Bayesian maximum a posteriori (MAP) estimator is [5, 23, 32],
According to the Bayes rule, the conditional probability density function (PDF) is \( p\left(X\mid Y\right)=\frac{p\left(Y\mid X\right)p(X)}{p(Y)} \); therefore, ignoring p(Y) because it is constant, \( \widehat{X}(Y) \) is,
where p_{ X } is the prior distribution of the noise-free coefficients and p_{ N } is the noise PDF, assumed zero-mean Gaussian with variance \( {\sigma}_N^2 \) [13, 27], \( {p}_N(N)=\frac{1}{\sigma_N\sqrt{2\pi }}\exp \left(-\frac{N^2}{2{\sigma}_N^2}\right) \).
By applying the logarithm function, Eq. (5) is written as,
where f(X) = log(p_{ X }(X)) [33]. Finding \( \widehat{X} \) is equivalent to solving \( \frac{Y-\widehat{X}}{\sigma_N^2}+{f}^{\prime}\left(\widehat{X}\right)=0 \).
If p_{ X }(X) is assumed Gaussian with zero mean and variance σ^{2}, then \( f(X)=-\log \left(\sqrt{2\pi}\sigma \right)-{X}^2/2{\sigma}^2 \), and the estimated \( \widehat{X}(Y) \) is [23, 34],
If p_{ X }(X) is assumed Laplacian with zero mean and variance σ^{2}, \( {p}_X(X)=\frac{1}{\sqrt{2}\sigma}\exp \left(-\frac{\sqrt{2}\mid X\mid }{\sigma}\right), \) then \( f(X)=-\log \left(\sigma \sqrt{2}\right)-\sqrt{2}\mid X\mid /\sigma \), and the estimated \( \widehat{X}(Y) \) is [23],
Eq. (8) is the classical soft shrinkage function [33] defined as,
Based on the classical soft shrinkage function, Eq. (8) can be rewritten as \( \widehat{X}(Y)=\mathrm{soft}\left(Y,\frac{\sqrt{2}{\sigma}_N^2}{\sigma}\right) \).
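The soft shrinkage rule of Eq. (9) and the Laplacian-prior threshold above can be sketched as follows (a minimal NumPy version; the guard for a vanishing signal deviation is our own assumption):

```python
import numpy as np

def soft(y, T):
    """Classical soft shrinkage: sign(y) * max(|y| - T, 0)."""
    return np.sign(y) * np.maximum(np.abs(y) - T, 0.0)

def bayes_threshold(sigma_n, sigma_x):
    """Threshold sqrt(2) * sigma_n^2 / sigma_x from the Laplacian-prior MAP estimate.

    If the signal deviation is (near) zero, the subband is treated as pure
    noise and every coefficient is shrunk to zero (our own safeguard).
    """
    if sigma_x <= 0:
        return np.inf
    return np.sqrt(2.0) * sigma_n ** 2 / sigma_x
```

The BayesShrink estimate of a subband is then `soft(Y, bayes_threshold(sigma_n, sigma_x))` applied element-wise.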
BiShrink denoising in transformed domain
In 2002, a novel denoising method named BiShrink was proposed [23, 35]. In fact, BiShrink is a simple nonlinear shrinkage function in the transformed domain, where subbands are related as child and parent. To obtain the threshold value for denoising a child subband, BiShrink also uses the coefficients of the parent subband.
Taking X_{1} and X_{2} as the child and parent coefficients in the transformed domain, the vector form of Eq. (3) is,
where X = (X_{1}, X_{2}), Y = (Y_{1}, Y_{2}), and N = (N_{1}, N_{2}). Similar to the Bayesian MAP estimator explained in Section 2.2, the PDFs of the clean coefficients p_{ X }(X) and the noise coefficients p_{ N }(N) must be known to estimate the noise-free coefficients \( \widehat{X}(Y) \) from the observed data. In the literature [23, 24, 36], the noise is often assumed Gaussian,
and in [23], the PDF of the noise-free coefficients p_{ X }(X) was given by,
According to Eq. (6), the bivariate MAP estimator is,
Estimating \( {\widehat{X}}_1 \) and \( {\widehat{X}}_2 \) requires solving the equations \( \frac{Y_1-{\widehat{X}}_1}{\sigma_N^2}+{f}_1\left(\widehat{X}\right)=0 \) and \( \frac{Y_2-{\widehat{X}}_2}{\sigma_N^2}+{f}_2\left(\widehat{X}\right)=0 \), where f_{1} and f_{2} denote the derivatives of f(X) with respect to X_{1} and X_{2}, respectively. According to Eq. (13), we have \( f(X)=\log \left(\frac{3}{2{\pi \sigma}^2}\right)-\frac{\sqrt{3}}{\sigma}\sqrt{X_1^2+{X}_2^2} \), \( {f}_1(X)=\frac{\partial f(X)}{\partial X_1}=-\frac{\sqrt{3}{X}_1}{\sigma \sqrt{X_1^2+{X}_2^2}} \), and \( {f}_2(X)=\frac{\partial f(X)}{\partial X_2}=-\frac{\sqrt{3}{X}_2}{\sigma \sqrt{X_1^2+{X}_2^2}} \). Assuming that the noise power of different subbands is invariant [36], i.e., \( {\sigma}_{N_1}={\sigma}_{N_2}={\sigma}_N \), and defining \( r=\sqrt{X_1^2+{X}_2^2} \), we have:
Substituting Eq. (17) into Eq. (15), the joint shrinkage function is [36],
In the BiShrink denoising method based on Eq. (18), the dead zone region where \( {\widehat{X}}_1(Y)=0 \) is,
To compare the BiShrink functions in the NSST and SWT domains and to observe the circular dead zone, we chose the "Barbara" image of size 512 × 512 pixels with 256 gray levels. The test image is corrupted by additive Gaussian noise at two noise levels (σ_{ N } = 10, 30), and the corresponding dead zone regions based on Eq. (19) are shown in Fig. 2. As given by Eq. (19) and observed in the figure, a direct relationship exists between the radius of the circular-like dead zone and the noise power. Since the NSST is sparser than the SWT, it yields a smaller dead zone radius.
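For reference, the joint shrinkage rule of Eq. (18) with its circular dead zone of radius \( \sqrt{3}{\sigma}_N^2/\sigma \) can be sketched as below; the division guard at r = 0 is our own numerical safeguard:

```python
import numpy as np

def bishrink(y1, y2, sigma_n, sigma_x):
    """Bivariate shrinkage of the child y1 given its parent y2.

    x1_hat = max(sqrt(y1^2 + y2^2) - sqrt(3)*sigma_n^2/sigma_x, 0)
             / sqrt(y1^2 + y2^2) * y1
    Coefficient pairs inside the circular dead zone of radius
    sqrt(3)*sigma_n^2/sigma_x are set to zero.
    """
    r = np.sqrt(y1 ** 2 + y2 ** 2)
    T = np.sqrt(3.0) * sigma_n ** 2 / sigma_x
    gain = np.maximum(r - T, 0.0) / np.maximum(r, 1e-12)  # avoid 0/0 at r = 0
    return gain * y1
```

Applied element-wise over a child subband and its parent, the function shrinks each child coefficient according to the joint magnitude r, which is exactly why a sparser transform (smaller σ_N relative to σ) yields the smaller dead zone seen in Fig. 2.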
Estimating noise and signal variance
The noise and signal variances, \( {\widehat{\sigma}}_N^2 \) and \( {\widehat{\sigma}}^2 \), respectively, are required for the implementation of both the BayesShrink [21] and BiShrink [23] algorithms. Using the median estimator [34] in the transformed domain for noise variance estimation is common:
where Y_{ℓ} refers to the coefficients of ℓth decomposition level. For any arbitrary subband in the transformed domain, \( {\sigma}_Y^2={\sigma}^2+{\sigma}_N^2 \). Since the observed signal in the transformed domain is modeled as zero mean [27], the variance of each subband is obtained by
where M is the size of the square-shaped window. The variance of every window is obtained, and the average value is computed. Using \( {\widehat{\sigma}}_Y^2 \) and \( {\widehat{\sigma}}_N^2 \), the signal standard deviation \( \widehat{\sigma} \) is [33]
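The two estimators can be sketched as follows; the constant 0.6745 is the standard median-to-sigma conversion factor that we assume Eq. (20) uses, and for simplicity the subband variance is computed globally rather than window by window:

```python
import numpy as np

def estimate_noise_sigma(finest_subband):
    """Robust median estimator: sigma_N = median(|Y|) / 0.6745,
    applied to a finest-level (highest-frequency) subband."""
    return np.median(np.abs(finest_subband)) / 0.6745

def estimate_signal_sigma(subband, sigma_n):
    """sigma_hat = sqrt(max(sigma_Y^2 - sigma_N^2, 0)), as in Eq. (22).

    sigma_Y^2 is the zero-mean second moment of the subband; the max(..., 0)
    keeps the estimate real when the subband is dominated by noise.
    """
    sigma_y2 = np.mean(np.asarray(subband, dtype=np.float64) ** 2)
    return np.sqrt(max(sigma_y2 - sigma_n ** 2, 0.0))
```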
Nonsubsampled shearlet transform (NSST)
A novel multiscale directional representation system called shearlets was proposed in 2005 [7]. Two properties, multiresolution and sparsity, render the ST attractive in science and engineering [10, 11, 37]. In the following, the continuous and discrete STs are explained briefly.
The continuous shearlet transform for an arbitrary signal f is:
where a ∈ R^{+}, s ∈ R, and t ∈ R^{2} refer to the scaling, shearing, and translation parameters, respectively, and the shearlet function is given by \( {\psi}_{ast}(x)={a}^{-3/4}\psi \left({M}_{as}^{-1}\left(x-t\right)\right) \) with \( {\mathrm{M}}_{as}=\left(\begin{array}{cc}1& s\\ {}0& 1\end{array}\right)\left(\begin{array}{cc}a& 0\\ {}0& \sqrt{a}\end{array}\right)=\left(\begin{array}{cc}a& \sqrt{a}s\\ {}0& \sqrt{a}\end{array}\right) \).
The discrete version of ST [10] used in digital signal processing is
where j, ℓ ∈ Z, k ∈ Z^{2}, \( {\psi}_{j,\mathrm{\ell},k}(x)={\left|\det {A}_0\right|}^{j/2}\psi \left({B}_0^{\mathrm{\ell}}{A}_0^jx-k\right) \), and \( {\mathrm{A}}_0=\left(\begin{array}{cc}4& 0\\ {}0& 2\end{array}\right) \), \( {B}_0=\left(\begin{array}{cc}1& 1\\ {}0& 1\end{array}\right) \).
In general, subsampling operations make a transform shift-variant. Therefore, by omitting the up- and down-sampling blocks, the SWT [30] and the NSST [10] were proposed in 2003 and 2008, respectively. In the nonsubsampled versions, as the coefficients are not decimated between the decomposition levels, all subband sizes are the same as that of the original input image. Therefore, SWT and NSST require more computation and storage in comparison with the conventional WT and ST.
NSST, as a multiscale directional representation, is able to give a good sparse representation of an image. To show the sparsity of NSST in contrast with SWT, the Barbara image of size 512 × 512 pixels with 256 gray levels is used. The original test image and the one corrupted by Gaussian noise are decomposed by SWT and NSST into three levels. As shown in Fig. 3, for SWT there are three subbands in each decomposition level, whereas for NSST there are 16, eight, and four subbands at the first, second, and third levels, respectively. Figure 3 shows the histograms of the SWT (1st and 4th subbands) and NSST (9th and 1st subbands, and all subbands of the next coarser level) coefficients. The histograms in Fig. 3 indicate that the NSST is sparser than the SWT for both noisy and noise-free images. Sparsity, which means that most coefficients are approximately zero, plays a key role in any thresholding algorithm. Sparsity is a typical characteristic of the transform domain, where noise is uniformly spread throughout all coefficients and the data is represented by a small subset of large coefficients [38]. Therefore, coefficients with small magnitudes can be considered as noise and set to zero. The approach in which each coefficient is compared with a threshold in order to decide whether it constitutes a desired part of the original data is called the thresholding approach. Obviously, a thresholding approach performs better in a sparser transform; since the basis functions of the NSST have multidirectional wedge-shaped support regions [2], it provides a better sparse representation than the SWT.
In this paper, in addition to showing the histograms, we also compute the average standard deviation (Sd) for an objective comparison between NSST and SWT in terms of sparsity. In this regard, the Sd parameter of all normalized subbands is obtained, and the average Sd for the noise-free and noisy "Lena" and Barbara images of size 512 × 512 is presented in Table 1. Increasing the noise power corresponds to a larger Sd value. However, for both test images and under different noise powers, the NSST is sparser than the SWT. Therefore, as anticipated, the performance of a threshold-based denoising filter is better in the NSST domain than in the SWT domain.
Proposed methods
In the first part of this section, the image assessment parameters used to evaluate denoising methods are explained, and the mutual information (MI), which measures the statistical dependency between a child and its corresponding parent coefficients, is expressed. Subsequently, the models for BiShrink in the transformed domain are proposed, and the weighted BiShrink in the NSST and SWT domains is applied for the first time.
For Figs. 4 and 5, we used eight test images, named Barbara, Lena, House, Boat, Goldhill, Fingerprint, Cameraman, and Peppers, of size 512 × 512 pixels with 256 gray levels. To achieve measurement reliability in a noisy environment, the algorithm was run 10 times for every image; therefore, average values over 80 (i.e., 8 × 10) completely independent trials are obtained.
Figure 1 shows the block diagram of the BiShrink despeckling method in the NSST and SWT domains. In general, the BiShrink denoising algorithm consists of a three-step process: estimating \( {\widehat{\sigma}}_N \) for every subband according to Eq. (20), estimating \( {\widehat{\sigma}}_Y^2 \) and \( {\widehat{\sigma}}^2 \) based on Eqs. (21) and (22), and obtaining the noise-free coefficients using Eq. (18).
Image assessment parameters
Among the image assessment parameters used to evaluate the performance of a despeckling algorithm, in this paper we chose the peak signal-to-noise ratio (PSNR) [39] and the structural similarity (SSIM) [40] as full-reference measures, and the equivalent number of looks (ENL) [13], the mean square difference (MSD) [41], and the edge save index (ESI) [13] as no-reference measures.
The PSNR measures image quality:
where \( \mathrm{MSE}=\frac{1}{mn}\sum \limits_{i=1}^m\sum \limits_{j=1}^n{\left[{\widehat{I}}_x\left(i,j\right)-{I}_x\left(i,j\right)\right]}^2 \) is the mean square error (MSE), mn is the image size, and I_{ x } and \( {\widehat{I}}_x \) are the input and retrieved images, respectively (see Fig. 1 for the notation).
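A minimal PSNR computation, assuming 8-bit images (peak value 255):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """PSNR = 10 * log10(peak^2 / MSE) between the clean and despeckled images."""
    mse = np.mean((np.asarray(ref, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```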
The SSIM index measures the similarity between the original and the despeckled image through a local statistical analysis (i.e., the mean, variance, and covariance of the unfiltered and despeckled pixel values within a sliding window). SSIM ∈ [−1, 1], where poor similarity between the original and the despeckled image corresponds to SSIM → −1, whereas good similarity is indicated by SSIM → 1.
ENL and MSD both measure the speckle suppression:
where \( \mathrm{NV}=\frac{1}{mn}\sum \limits_{i=1}^m\sum \limits_{j=1}^n{\left[{\widehat{I}}_x\left(i,j\right)-\overline{{\widehat{I}}_x}\right]}^2 \) and \( \overline{{\widehat{I}}_x}=\frac{1}{mn}\sum \limits_{i=1}^m\sum \limits_{j=1}^n{\widehat{I}}_x\left(i,j\right) \). As the ENL carries no information about image resolution degradation, it is often used jointly with other parameters such as the MSD. Large values of ENL and MSD indicate significant filtering. As the ENL should be computed over a uniform region, the image is divided into cells of size 16 × 16 or 25 × 25 pixels, the ENL is computed for every block, and finally the values are averaged to obtain the ENL.
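The block-averaged ENL described above can be sketched as follows, assuming the usual definition ENL = mean² / variance over each cell:

```python
import numpy as np

def enl(region):
    """ENL of a (supposedly homogeneous) region: mean^2 / variance."""
    region = np.asarray(region, dtype=np.float64)
    m = np.mean(region)
    v = np.mean((region - m) ** 2)
    return m ** 2 / v if v > 0 else np.inf

def block_averaged_enl(img, block=16):
    """Average ENL over non-overlapping block x block cells, as done in the paper."""
    h, w = img.shape
    vals = [enl(img[i:i + block, j:j + block])
            for i in range(0, h - block + 1, block)
            for j in range(0, w - block + 1, block)]
    return float(np.mean(vals))
```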
The ESI [13] reflects the edge preservation capability of a despeckling technique and is measured in both the horizontal and vertical directions:
MI [25, 42,43,44], a parameter for measuring the dependency between X_{1} as the child and X_{2} as the parent, is expressed as
where (x_{1}, x_{2}) is a pair of random variables with joint PDF p(x_{1}, x_{2}) and marginal PDFs p(x_{1}) and p(x_{2}). The MI, I(X_{1}, X_{2}), is zero if the child and parent are totally independent, whereas a larger MI means more dependency between the child and parent coefficients in the transformed domain. Therefore, the best child–parent coefficients are those that are the most dependent (i.e., have the largest MI).
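A minimal histogram-based MI estimate (in nats) between child and parent coefficient arrays can be sketched as below; the bin count is our own choice, not taken from the paper:

```python
import numpy as np

def mutual_information(x1, x2, bins=64):
    """Histogram estimate of I(X1; X2) between child and parent coefficients.

    The joint PDF is approximated by a normalized 2-D histogram, and the
    marginals by its row/column sums; higher values indicate stronger
    child-parent dependency.
    """
    joint, _, _ = np.histogram2d(np.ravel(x1), np.ravel(x2), bins=bins)
    p12 = joint / joint.sum()
    p1 = p12.sum(axis=1, keepdims=True)   # marginal of x1, shape (bins, 1)
    p2 = p12.sum(axis=0, keepdims=True)   # marginal of x2, shape (1, bins)
    mask = p12 > 0                        # avoid log(0) on empty cells
    return float(np.sum(p12[mask] * np.log(p12[mask] / (p1 @ p2)[mask])))
```

With this estimator, a child compared against a fully dependent parent yields a large MI, while a shuffled (independent) parent yields a value near zero, mirroring the ordering of the parent models reported in Fig. 4.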
Parent and child coefficient models
Although a fair amount of research on image denoising in the transformed domain has been carried out [31, 35], thresholding is still attractive due to its simplicity [19, 20]. Moreover, thresholding with a bivariate MAP estimator that exploits the dependency between coefficients [25, 45] gives appropriate results.
The implementation of BiShrink denoising based on Eq. (18) requires a child coefficient X_{1} and its parent coefficient X_{2}. In the SWT domain, for an arbitrary coefficient considered as the child, X_{1}, three parents named X_{2}(N), X_{2}(SS), and X_{2}(NC) can be considered, where X_{2}(N) refers to a neighboring subband at the same level, X_{2}(SS) is the subband at the same orientation but at the next coarser level, and X_{2}(NC) denotes all the subbands that belong to the next coarser level. For example, considering the numbered SWT subbands shown in Fig. 3, if the 1st subband is X_{1}, the 2nd or 3rd subband can be considered as X_{2}(N), the 4th subband is X_{2}(SS), and the 4th–6th subbands together are X_{2}(NC). Similarly, in the NSST domain, for an arbitrary child X_{1}, three parents called X_{2}(N), X_{2}(NC), and X_{2}(OPP) are considered, in which X_{2}(OPP) refers to a subband at the same level as X_{1} but in the opposite orientation. For example, considering the numbered NSST subbands shown in Fig. 3, if the 9th subband is X_{1}, the 8th or 10th subband can be considered as X_{2}(N), the 17th–24th subbands together are X_{2}(NC), and the 1st subband is X_{2}(OPP). For the sample image "Zone Plate," the child–parent pairs in the SWT and NSST domains are shown in Fig. 6.
As mentioned in Section 4.1, the best child–parent coefficients are those that are the most dependent (the largest MI). We now use the eight test images mentioned above and add Gaussian noise with zero mean and standard deviation σ_{ N } = 30. The noisy images are decomposed by SWT and NSST into three levels, and the MI for different subbands as the child, considering the three introduced models, is obtained. As expected and shown in Fig. 4, the MI for both transformed domains is nonzero because of the dependency between a child and the inter- or intra-subbands. Figure 4 shows that I(X_{1}, X_{2}(NC)) > I(X_{1}, X_{2}(SS)) > I(X_{1}, X_{2}(N)) for the SWT, and I(X_{1}, X_{2}(NC)) > I(X_{1}, X_{2}(N)) > I(X_{1}, X_{2}(OPP)) for the NSST. In this paper, we use X_{2}(SS) as the classical parent in the SWT domain, based on previous studies [23, 26, 28, 35]. For the NSST domain, we propose the following two models:

Model 1: considering X_{2}(OPP) as the parent; the method is named BINSST(1).

Model 2: considering X_{2}(NC) as the parent; the method is named BINSST(2).
Although model 1 was proposed for the ST domain [25], we use it in the NSST domain as well. In addition, proposing model 2 in the NSST domain according to the MI shown in Fig. 4 is a contribution of this paper, where model 2 is expected to outperform model 1 in the NSST domain.
Weighted shrinkage method
Most shrinkage denoising techniques, including BiShrink [23, 25, 31, 35], assume that the noise power is the same for all subbands. Although this assumption is true for the WT [23], it was shown to be not entirely correct for the nonsubsampled contourlet transform [42] and the NSST [13, 27] (i.e., the noise powers of different subbands are not the same). Although the weighted BayesShrink in the NSST domain [27] has been used previously, the weighted BiShrink in the NSST domain, called the WBINSST method, is proposed in this paper. To verify the assumption above, the original noise-free test images and the noisy images are decomposed to three levels by SWT and NSST, and the MSE between the noise-free and noisy image coefficients is obtained for each decomposition level ℓ and subband k,
As the nonsubsampled versions are used, all subband sizes are the same as that of the input image (i.e., mn). The MSE of all subbands for the eight test images in the SWT and NSST domains is obtained, and the average values are shown in Fig. 5. As expected, all subbands of the SWT (Fig. 5a2) are affected by noise approximately equally, whereas in the NSST some subbands are more robust against noise than others (Fig. 5b2).
According to Eq. (18), the BiShrink threshold value [36], named TBI, for every decomposition level ℓ and subband k is
where \( {\sigma}_{X_{\mathrm{\ell},k}}^2 \) (or σ^{2}) is the power of the noise-free signal in the transformed domain. In this paper, the noise variance is approximated by the robust median estimator of Eq. (20), and the power of the noise-free signal is estimated using Eq. (22); thus, the weighted BiShrink threshold, named TWBI, is
where α is the weighting factor that depends on the decomposition level ℓ and subband k, expressed as
where \( \overline{{\mathrm{MSE}}_{\mathrm{\ell}}} \) is the average MSE of all subbands belonging to the same level, \( \overline{{\mathrm{MSE}}_{\mathrm{\ell}}}=\frac{1}{K_{\mathrm{\ell}}}\sum \limits_{k=1}^{K_{\mathrm{\ell}}}{\mathrm{MSE}}_{\mathrm{\ell},k} \), and K_{ℓ} is the number of subbands at the ℓth decomposition level.
Using the weighting factor α_{ℓ, k} results in the optimum threshold value in Eq. (33), which is then applied to the coefficients through the shrinkage function of Eq. (18). The corresponding methods are named WBISWT, WBINSST(1), and WBINSST(2).
Obtaining the values of α_{ℓ, k} is the main challenge in implementing the weighted shrinkage method. In this paper, two approaches for determining the weighting factor α_{ℓ, k} are proposed, one for the test images and one for real SAR images. For the test images, the MSE between the noiseless and noisy coefficients gives the optimum weighting factor α_{ℓ, k} at each decomposition level and for every subband; see Fig. 5a3, b3. In real applications, including SAR imaging, clean or noise-free signals are nonexistent. Therefore, a white or flat image (whose pixels have the same gray level) with the same size as the input image (512 × 512 pixels herein) is used as the noiseless signal, and with regard to Eq. (34), the MSE between its noiseless and noisy coefficients gives the weighting factor α_{ℓ, k} at each decomposition level and for every subband; see Fig. 7. According to Fig. 5a3, b3, we conclude that the values of the weighting factor are independent of both the image type and the noise variance, but depend on the transform, i.e., the coefficients' noise efficiency. In the next section, the obtained weighting factors α_{ℓ, k} shown in Fig. 7 are used for all weighted versions of the shrinkage methods in the NSST and SWT domains, for despeckling both the test images and the real SAR images.
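Since Eq. (34) is not reproduced in this excerpt, the following sketch assumes one plausible reading of it: the per-subband weight is the subband MSE normalized by the level-average MSE, so subbands hit harder by noise receive a larger threshold. Both `weighting_factors` and the scaled threshold are hypothetical illustrations, not the paper's exact formulas:

```python
import numpy as np

def weighting_factors(mse_per_subband):
    """Per-subband weighting factors for one decomposition level.

    Hypothetical reading of Eq. (34): alpha_{l,k} = MSE_{l,k} / mean_k(MSE_{l,k}),
    so the weights average to 1 over the level and noisier subbands get
    alpha > 1 (a larger threshold).
    """
    mse = np.asarray(mse_per_subband, dtype=np.float64)
    return mse / mse.mean()

def weighted_bishrink_threshold(alpha, sigma_n, sigma_x):
    """T_WBI = alpha * sqrt(3) * sigma_n^2 / sigma_x, per Eq. (33)."""
    return alpha * np.sqrt(3.0) * sigma_n ** 2 / sigma_x
```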
Experimental results and discussion
In this paper, we used the images Barbara, Lena, House, Boat, Goldhill, Fingerprint, Cameraman, and Peppers of size 512 × 512 pixels with 256 gray levels as the test images, and the images Farmland, Peninsula, and Shipping Terminal [46] of size 500 × 500 pixels and Aircraft [47] of size 512 × 512 pixels as the real SAR images. The three images Farmland, Peninsula, and Shipping Terminal [46] are part of the SAR images collected by RADARSAT-1 in the Fine Beam 2 mode on June 16, 2002. Most of the illuminated scene was in Delta, British Columbia, Canada. The radar was operating in the C-band with HH polarization. These three parts have good image characteristics, such as grains as well as many high- and low-frequency parts. The mini-SAR image "Aircraft" [47] was collected from the Kirtland AFB region on August 27, 2007 in the Ka- and Ku-bands.
An input signal is decomposed into three levels using SWT and NSST. According to the numbered subbands shown in Fig. 5a1, b1, the SWT has three subbands in each level, whereas the NSST has 16, eight, and four subbands for the 1st, 2nd, and 3rd decomposition levels, respectively. The block diagram in Fig. 1 indicates that the homomorphic framework (applying the logarithm function at the beginning and the exponential at the end) is used for both the test images and the real SAR images. The methods for obtaining the threshold values of the shrinkage methods and their weighted versions are explained in detail in Section 4.3.
Here, in the SWT domain, we evaluate the performance of BayesShrink (BSWT) [30], weighted BayesShrink (WBSWT), BiShrink (BISWT), and weighted BiShrink (WBISWT); in the NSST domain, we present the results achieved by BayesShrink (BNSST) [2], weighted BayesShrink (WBNSST) [27], BiShrink based on models 1 and 2 (BINSST(1), BINSST(2)), and weighted BiShrink based on models 1 and 2 (WBINSST(1), WBINSST(2)) in terms of subjective and objective criteria. Furthermore, two methods in the NSST domain, GΓD-NSST [13] and NIG-NSST [13], and a high-order total variation method based on compressive sensing called HighTV [15] are also applied to compare the performance of the proposed methods with state-of-the-art approaches under the same circumstances. For this purpose, we implemented all the methods of [2, 13, 15, 27, 30].
For Barbara as a sample test image, corrupted by speckle noise with variance \( {\sigma}_N^2=0.1 \), the resulting images in the SWT and NSST domains are shown in Fig. 8. The objective full-reference parameters [39, 40] (PSNR, SSIM) are obtained for the test images (for the average values, the algorithm was run 30 times) and presented in Tables 2 and 3. Although all methods in the NSST domain outperform those in the SWT domain, no considerable improvement of the weighted versions over the direct ones (for example, WBNSST versus BNSST) was observed for artificial speckle noise.
Using Aircraft of size 512 × 512 pixels as a real SAR image (see Fig. 9) shows that the proposed methods in the NSST domain are better than the approaches in the SWT domain in terms of visual quality, i.e., noise reduction and edge preservation. Table 4, according to four no-reference parameters, also shows that for "Peninsula" as a real SAR image, not only do the methods in the NSST domain outperform those in the SWT domain, but the performance of the weighted versions is also significantly better than that of the direct ones; for example, compare WBINSST(2) and BINSST(2).
Based on the above results for artificial speckle noise and for the shrinkage methods in the SWT domain, in the following we consider only real SAR images, using the four proposed methods in the NSST domain, and compare them with [13] and [15]. While the visual results shown in Fig. 10 do not distinguish the methods precisely, the no-reference parameters presented in Table 5 indicate that WBINSST(2) is the best approach. Since the ENL is one of the most important parameters indicating speckle suppression in real SAR images, in this paper we compute it in two ways: (1) splitting an image into blocks of size 16 × 16 or 25 × 25 pixels for the real SAR images of size 500 × 500 and 512 × 512, respectively, obtaining the ENL for every block, and then reporting the mean ENL in Tables 4 and 5; (2) considering a homogeneous region of size 50 × 50 pixels and obtaining the ENL value, named ENL^{h}; see Table 6.
Finally, Fig. 11 shows the ratio images [2, 48] for Farmland, Peninsula, and Shipping Terminal using the four proposed methods in the NSST domain, compared with [13] and [15], where \( {I}_{\mathrm{ratio}}={I}_y/{\overset{\frown }{I}}_x \), I_{ y } is the real SAR image, and \( {\overset{\frown }{I}}_x \) is the despeckled image. The ratio image provides significant information on speckle suppression and edge preservation: any geometric structure or detail correlated with the original image indicates that relevant information (e.g., edges or bright scatterers) has been removed or modified by the despeckling method, mainly in the nonhomogeneous areas. An ideal filter would not alter such edges or bright scatterers, so its ratio image would show a pure speckle pattern; a ratio image without any visible structure or edges therefore indicates an appropriate algorithm. Figure 11 shows the ratio images of our proposed methods in the NSST domain in comparison with references [13] and [15].
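The ratio-image check reduces to a one-line computation; the toy example below (our own construction, not the paper's data) illustrates why an ideal filter yields a pure speckle pattern: on a flat scene, dividing the noisy image by a perfect estimate recovers exactly the unit-mean speckle.

```python
import numpy as np

def ratio_image(noisy, despeckled, eps=1e-8):
    """I_ratio = I_y / I_hat_x. For a good despeckling filter the result
    should look like pure speckle, with no edges or structures visible."""
    return noisy / (despeckled + eps)

# Ideal filter on a flat scene: the ratio recovers the speckle pattern,
# so its mean stays close to 1.
rng = np.random.default_rng(0)
truth = np.ones((32, 32))
speckle = rng.gamma(4.0, 0.25, truth.shape)  # unit-mean multiplicative speckle
ratio = ratio_image(truth * speckle, truth)
```

In practice one inspects `ratio` visually (as in Fig. 11) for residual edges; leaked structure in the ratio image means the filter has smoothed away genuine scene content.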
Conclusions
In this paper, three methods in the SWT domain and four approaches in the NSST domain are developed based on BayesShrink, BiShrink, weighted BayesShrink, and weighted BiShrink. For the BiShrink implementation in the NSST domain, two models for choosing the child-parent pair are proposed with regard to the MI parameter. Although the model recommended by the MI value performs best for synthesized images with highly detailed content, it is not appropriate for true SAR images or for synthesized images with many smooth regions. Since this study showed that thresholding-based methods in the NSST domain outperform those in the SWT domain, finding new parameters for choosing a suitable child-parent pair in the NSST domain is left for future work.
References
 1.
H Chen, Y Zhang, H Wang, C Ding, Stationary-wavelet-based despeckling of SAR images using two-sided generalized gamma models. IEEE Geosci. Remote Sens. Lett. 9(6), 1061–1065 (2012)
 2.
B Hou, X Zhang, X Bu, H Feng, SAR image despeckling based on nonsubsampled shearlet transform. IEEE J. Selected Topics Appl. Earth Observations Remote Sensing 5(3), 809–823 (2012)
 3.
DL Donoho, De-noising by soft-thresholding. IEEE Trans. Inf. Theory 41(3), 613–627 (1995)
 4.
EJ Candes, DL Donoho, Curvelets: A Surprisingly Effective Nonadaptive Representation for Objects with Edges (DTIC Document, 1999)
 5.
X Zhang, X Jing, Image denoising in contourlet domain based on a normal inverse Gaussian prior. Digit. Signal Process. 20(5), 1439–1446 (2010)
 6.
MN Do, Directional Multiresolution Image Representations, Ph.D. dissertation (2002)
 7.
D Labate, WQ Lim, G Kutyniok, G Weiss, Sparse multidimensional representation using shearlets. Proc. SPIE 5914, 59140U (2005)
 8.
K Guo, D Labate, Optimally sparse multidimensional representation using shearlets. SIAM J. Math. Anal. 39(1), 298–318 (2007)
 9.
WQ Lim, The discrete shearlet transform: a new directional transform and compactly supported shearlet frames. IEEE Trans. Image Process. 19(5), 1166–1180 (2010)
 10.
G Easley, D Labate, WQ Lim, Sparse directional image representations using the discrete shearlet transform. Appl. Comput. Harmon. Anal. 25(1), 25–46 (2008)
 11.
GR Easley, D Labate, F Colonna, Shearlet-based total variation diffusion for denoising. IEEE Trans. Image Process. 18(2), 260–268 (2009)
 12.
VM Patel, GR Easley, DM Healy Jr, Shearletbased deconvolution. IEEE Trans. Image Process. 18(12), 2673–2685 (2009)
 13.
S Jafari, S Ghofrani, Using two coefficients modeling of nonsubsampled shearlet transform for despeckling. J. Appl. Remote. Sens. 10(1), 015002 (2016)
 14.
SQ Liu, SH Hu, Y Xiao, YL An, Bayesian shearlet shrinkage for SAR image denoising via sparse representation. Multidim. Syst. Sign. Process. 25(4), 683–701 (2014)
 15.
H Markarian, S Ghofrani, High-TV-based CS framework using MAP estimator for SAR image enhancement. IEEE J. Selected Topics Appl. Earth Observations Remote Sensing 10(9), 4059–4073 (2017)
 16.
C Yan, Y Zhang, J Xu, F Dai, J Zhang, Q Dai, F Wu, Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Trans. Circuits Syst. Video Technol. 24, 2077–2089 (2014)
 17.
C Yan, Y Zhang, J Xu, F Dai, L Li, Q Dai, F Wu, A highly parallel framework for HEVC coding unit partitioning tree decision on many-core processors. IEEE Signal Process. Lett. 21, 573–576 (2014)
 18.
MC Motwani, MC Gadiya, RC Motwani, FC Harris, Survey of image denoising techniques, in Proceedings of GSPX, pp. 27–30 (2004)
 19.
DL Donoho, IM Johnstone, Ideal spatial adaptation by wavelet shrinkage. Biometrika 81(3), 425–455 (1994)
 20.
DL Donoho, IM Johnstone, Adapting to unknown smoothness via wavelet shrinkage. J. Am. Stat. Assoc. 90(432), 1200–1224 (1995)
 21.
HA Chipman, ED Kolaczyk, RE McCulloch, Adaptive Bayesian wavelet shrinkage. J. Am. Stat. Assoc. 92, 1413–1421 (1997)
 22.
Q Liu, L Ni, Image denoising using Bayesian shrink threshold based on weighted adaptive directional lifting wavelet transform, in International Conference on Graphic and Image Processing. Proc. SPIE 8285, 828570 (2011)
 23.
IW Selesnick, Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency. IEEE Trans. Signal Process. 50(11), 2744–2756 (2002)
 24.
Z Dexiang, W Xiaopei, G Qingwei, G Xiaojing, SAR image despeckling via bivariate shrinkage based on contourlet transform. IEEE Int. Symp. Comput. Intell. Design 2, 12–15 (2008)
 25.
Q Guo, S Yu, X Chen, C Liu, W Wei, Shearletbased image denoising using bivariate shrinkage with intraband and opposite orientation dependencies. IEEE Int. Joint Conf. Comput. Sci. Optimization 1, 863–866 (2009)
 26.
S Chitchian, M Fiddy, NM Fried, Denoising during optical coherence tomography of the prostate nerves via bivariate shrinkage using dual-tree complex wavelet transform, in SPIE BiOS: Biomedical Optics. Proc. SPIE 7161, 716112 (2009)
 27.
S Jafari, S Ghofrani, M Sheikhan, Comparing undecimated wavelet, nonsubsampled contourlet and shearlet for SAR images despeckling. Majlesi J. Electr. Eng. 9(3) (2015)
 28.
X Xu, Y Zhao, W Zhou, Y Peng, SAR image denoising based on alpha-stable distribution and Bayesian wavelet shrinkage, in Sixth International Symposium on Multispectral Image Processing and Pattern Recognition. Proc. SPIE 7495, 74951U (2009)
 29.
W Hongzhi, H Cai, Locally adaptive bivariate shrinkage algorithm for image denoising based on nonsubsampled contourlet transform. IEEE Int. Conf. Comput. Mechatronics Control Electron. Eng. 10(6), 33–36 (2010)
 30.
X Wang, RS Istepanian, YH Song, Microarray image enhancement by denoising using stationary wavelet transform. IEEE Trans. NanoBioscience 2(4), 184–189 (2003)
 31.
D Min, Z Jiuwen, M Yide, Image denoising via bivariate shrinkage function based on a new structure of dual contourlet transform. Signal Process. 109, 25–37 (2015)
 32.
F Lenzen, Statistical Regularization and Denoising, Ph.D. dissertation in Mathematics, Chap. 1 (2006)
 33.
A Hyvärinen, Sparse code shrinkage: denoising of non-Gaussian data by maximum likelihood estimation. Neural Comput. 11(7), 1739–1768 (1999)
 34.
DX Zhang, QW Gao, XP Wu, Bayesian based speckle suppression for SAR image using contourlet transform. J. Electron Sci. Technol. China 6(1), 79–82 (2008)
 35.
L Şendur, IW Selesnick, Bivariate shrinkage with local variance estimation. IEEE Signal Process. Lett. 9(12), 438–441 (2002)
 36.
S Xing, Q Xu, D Ma, Speckle denoising based on bivariate shrinkage functions and dualtree complex wavelet transform. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 38, 1–57 (2008)
 37.
S Dahlke, G Kutyniok, P Maass, C Sagiv, HG Stark, G Teschke, The uncertainty principle associated with the continuous shearlet transform. Int. J. Wavelets Multiresolution Inf. Process. 6(02), 157–181 (2008)
 38.
G Andria, F Attivissimo, AML Lanzolla, M Savino, A suitable threshold for speckle reduction in ultrasound images. IEEE Trans. Instrum. Meas. 62(8) (2013)
 39.
Z Wang, AC Bovik, HR Sheikh, EP Simoncelli, Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)
 40.
D Brunet, ER Vrscay, Z Wang, On the mathematical properties of the structural similarity index. IEEE Trans. Image Process. 21, 1488–1499 (2012)
 41.
J Zhang, TM Le, S Ong, TQ Nguyen, Noreference image quality assessment using structural activity. Signal Process. 91(11), 2575–2588 (2011)
 42.
M Kiani, S Ghofrani, Two new methods based on contourlet transform for despeckling synthetic aperture radar images. J. Appl. Remote. Sens. 8(1), 083604 (2014)
 43.
XY Wang, YC Liu, HY Yang, Image denoising in extended shearlet domain using hidden Markov tree models. Digit. Signal Process. 30, 101–113 (2014)
 44.
TM Cover, JA Thomas, Elements of Information Theory (John Wiley & Sons, 2012)
 45.
GS Shin, MG Kang, Waveletbased denoising considering interscale and intrascale dependences. Opt. Eng. 44, 067002–067009 (2005)
 46.
IG Cumming, FH Wong, Digital Processing of Synthetic Aperture Radar Data (Artech House, 2005)
 47.
AW Doerry, Automatic Compensation of Antenna Beam Roll-Off in SAR Images (US Department of Energy, 2006)
 48.
L Gomez, ME Buemi, JC JacoboBerlles, ME Mejail, A new image quality index for objectively evaluating despeckling filtering in SAR images. IEEE J. Selected Topics Appl. Earth Observations Remote Sensing 9, 1297–1307 (2016)
Acknowledgements
The authors would like to thank S. Jafari [13] for providing MATLAB code to implement the GΓD-NSST and NIG-NSST algorithms, and H. Markarian [15] for providing the High-TV code. Further, we thank Sandia National Laboratories [47] for providing the true SAR image.
Funding
Not applicable
Availability of data and materials
The three images (Farmland, Peninsula, and Shipping Terminal) [46] are part of the SAR images collected by RADARSAT-1 in the Fine Beam 2 mode on June 16, 2002. Most of the illuminated scenes were in Delta, British Columbia, Canada. The radar operated in the C-band with HH polarization. The MiniSAR image Aircraft, provided by Sandia National Laboratories [47], was collected over the Kirtland AFB region on August 27, 2007 in the Ka-band and Ku-band.
Author information
Affiliations
Contributions
We confirm that this work is original and has not been published nor is it currently under consideration for publication elsewhere. Both authors read and approved the final manuscript.
Corresponding author
Correspondence to Sedigheh Ghofrani.
Ethics declarations
Authors’ information
Nikou Farhangi was born in Macoo, Iran, in 1987. She received the B.Sc. degree in telecommunication engineering from Islamic Azad University, Urmiya Branch, in 2010 and the M.Sc. degree in communication systems from Islamic Azad University, South Tehran Branch, Iran, in 2017. Her research interests include digital signal and image processing.
Sedigheh Ghofrani was born in 1968 in Ghochan. She received the B.S. degree in electronic engineering from Tehran University, Iran, in 1991, the M.S. degree in communication from Islamic Azad University, South Tehran Branch, Iran, in 1997, and the Ph.D. degree in electronics from Iran University of Science and Technology, Tehran, Iran, in 2004. She was an Assistant Professor in the Department of Electronic and Electrical Engineering, Islamic Azad University, South Tehran Branch, from 2004 to 2011 and has been an Associate Professor there since 2012. In 2003, she spent 8 months at the School of Electronic and Electrical Engineering, the University of Leeds, UK, supported by the British Council foundation. In 2012, she spent 8 months at the Center for Advanced Communications (CAC) at Villanova University, PA, USA, as a Visiting Research Professor. Her areas of research include image processing and signal processing.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Received
Accepted
Published
DOI
Keywords
 BayesShrink
 BiShrink
 Weighted BayesShrink
 Weighted BiShrink
 Nonsubsampled shearlet transform
 Stationary wavelet transform
 SAR images despeckling