 Research Article
 Open access
A New Denoising System for SONAR Images
EURASIP Journal on Image and Video Processing, volume 2009, Article number: 173841 (2009)
Abstract
SONAR images are perturbed by speckle noise. The use of speckle reduction filters is necessary to optimize the image exploitation procedures. This paper presents a new denoising method in the wavelet domain, which reduces the speckle while preserving the structural features and textural information of the scene. Shift invariance associated with good directional selectivity is important for the use of a wavelet transform (WT) in many fields of image processing. Generally, complex wavelet transforms, for example, the Dual-Tree Complex Wavelet Transform (DTCWT), have these useful properties. In this paper, we propose the use of the DTCWT in association with Maximum A Posteriori (MAP) filters. Such systems produce denoising of different quality in regions with different degrees of homogeneity. We propose a solution for the reduction of this unwanted effect based on diversity enhancement. The corresponding denoising algorithm is simple and fast. Some simulation results prove the performance obtained.
1. Introduction
The SONAR images represent a particular case of Synthetic Aperture Radar (SAR) images. The SAR images are perturbed by speckle, which is of multiplicative nature. The aim of a denoising algorithm is to reduce the noise level while preserving the image features. A first particularity of SONAR images is their potentially low quality. Depending on the acquisition conditions, the Signal to Noise Ratio (SNR) can be very low. A second feature of the SONAR images is that they contain almost homogeneous and textured regions. The presence of edges is relatively rare. The multiplicative speckle noise that disturbs the SONAR images can be transformed into an additive noise with the aid of a logarithm computation block. To obtain the denoising result, the logarithm is inverted at the end of the process. A potential architecture for a SONAR denoising system is presented in Figure 1. The denoising system must contain a mean correction block. The corresponding block in Figure 1 computes the mean of the acquired image, which is equal to the mean of its noise-free component because the speckle noise has unitary mean. Next it corrects the mean of the result: the mean of the image at the output of the block that inverts the logarithm is subtracted, and the mean of the acquired image is added.
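The homomorphic pipeline just described (log, additive-noise kernel, exponential, mean correction) can be sketched minimally in NumPy. The `additive_denoiser` argument stands in for the wavelet-domain kernel discussed below, and the gamma-distributed test image is only an illustration of unit-mean multiplicative speckle, not the paper's data:

```python
import numpy as np

def homomorphic_denoise(image, additive_denoiser, eps=1e-6):
    """Sketch of the homomorphic denoising pipeline of Figure 1:
    log -> additive-noise denoising kernel -> exp -> mean correction."""
    input_mean = image.mean()          # speckle has unit mean, so this
                                       # estimates the clean-image mean
    log_img = np.log(image + eps)      # multiplicative -> additive noise
    denoised_log = additive_denoiser(log_img)
    out = np.exp(denoised_log) - eps   # invert the logarithm
    # mean correction: subtract the output mean, add the acquired mean
    return out - out.mean() + input_mean

# usage: an identity "denoiser" just exercises the pipeline
rng = np.random.default_rng(0)
img = rng.gamma(shape=4.0, scale=0.25, size=(8, 8)) * 100.0
out = homomorphic_denoise(img, lambda x: x)
```

With the identity kernel the pipeline is transparent, which is a quick sanity check that the mean-correction bookkeeping is consistent.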
The first focus of this paper is the additive-noise denoising kernel in Figure 1.
The multiresolution analysis performed by the WT is a powerful image denoising tool. In the wavelet domain, the additive noise is uniformly spread throughout the coefficients, while most of the image information is concentrated in the few largest ones (sparsity of the wavelet representation). The most straightforward way of distinguishing information from noise in the wavelet domain consists of thresholding the wavelet coefficients. Soft-thresholding is the most popular strategy and has been theoretically justified by Donoho and Johnstone [1]. They propose a three-step denoising algorithm:

(1) the computation of a forward WT,

(2) the filtering with a nonlinear filter,

(3) the computation of the corresponding inverse wavelet transform (IWT).
They use the Discrete Wavelet Transform (DWT) and the soft-thresholding filter. They do not make any explicit hypothesis on the noise-free image, so this method can be considered nonparametric. Their unique statistical hypothesis refers to the noise, considered additive, white and Gaussian (AWGN). The soft-thresholding filter sets to zero all the wavelet coefficients with absolute value smaller than a threshold. This threshold is selected to minimize the minimax approximation error.
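The nonlinear filtering step above can be sketched as follows: a minimal NumPy illustration of soft thresholding with the Donoho-Johnstone universal threshold. The forward and inverse transform steps are omitted, so the function operates directly on an array of coefficients:

```python
import numpy as np

def soft_threshold(coeffs, t):
    # shrink toward zero; coefficients below t in magnitude become zero
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def universal_threshold(n_coeffs, sigma_n):
    # Donoho-Johnstone universal threshold: sigma_n * sqrt(2 log N)
    return sigma_n * np.sqrt(2.0 * np.log(n_coeffs))

# usage on a toy coefficient array
shrunk = soft_threshold(np.array([-3.0, 0.5, 2.0]), 1.0)
```

In a full denoiser these functions would sit between the forward WT and the IWT of steps (1) and (3).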
The soft-thresholding filter was enhanced in [2], where the same hypotheses concerning the noise-free image and the noise were considered.
Some relatively recent research has addressed the development of statistical models of wavelet coefficients of natural images and the application of these models to image denoising [3–9]. The corresponding signal processing treatments can be considered parametric denoising methods. An appealing particularity of the WTs is the interscale dependence: if at a given scale a coefficient is large, its correspondent at the next scale (having the same spatial coordinates) will also be large. The wavelet coefficient statistical models which exploit the dependence between coefficients give better results than the ones using an independence assumption [5, 6, 8, 9]. The denoising is performed in [5, 6] with the aid of maximum a posteriori (MAP) filters.
If we use the denotation $s$ for the wavelet coefficients of the noise-free image and $n$ for the wavelet coefficients of the noise, then the observed coefficients can be written $w = s + n$. The MAP estimation of $s$, $\hat{s}(w)$, realized using the observation $w$, is given by the following MAP filter equation:

$$\hat{s}(w) = \arg\max_s p_{s|w}(s \mid w) = \arg\max_s \left[ p_n(w - s)\, p_s(s) \right],$$

where $p_s$ represents the probability density function (pdf) of $s$. Generally, the above equation has no analytical solution. There are some exceptions. For example, if both $s$ and $n$ are zero-mean Gaussian distributed, with variances $\sigma_s^2$ and $\sigma_n^2$, then the MAP filter becomes the very well known zero-order Wiener filter [10]. If the coordinates of the current pixel are $(i, j)$, then the input-output relation of the local zero-order Wiener filter is

$$\hat{s}(i, j) = \frac{\hat{\sigma}_s^2(i, j)}{\hat{\sigma}_s^2(i, j) + \hat{\sigma}_n^2}\, w(i, j).$$
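The local zero-order Wiener shrinkage can be sketched in NumPy as follows; the clean-signal variance is estimated per pixel as the clipped excess of the local second moment over the noise variance, with the window size and reflect padding chosen here only for illustration:

```python
import numpy as np

def local_wiener(w, sigma_n, win=7):
    """Local zero-order Wiener shrinkage of wavelet coefficients w.
    The clean-signal variance is estimated in a sliding window as
    max(local second moment - sigma_n^2, 0)."""
    pad = win // 2
    wp = np.pad(w, pad, mode='reflect')
    out = np.empty_like(w, dtype=float)
    for i in range(w.shape[0]):
        for j in range(w.shape[1]):
            block = wp[i:i + win, j:j + win]
            var_s = max(np.mean(block ** 2) - sigma_n ** 2, 0.0)
            # Wiener gain var_s / (var_s + sigma_n^2)
            out[i, j] = var_s / (var_s + sigma_n ** 2) * w[i, j]
    return out

# usage: constant coefficients of value 5 with unit noise variance
flat = local_wiener(np.full((4, 4), 5.0), 1.0, win=3)
```

The double loop is deliberately naive for readability; a practical implementation would use a separable box filter for the local moments.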
The model of natural images is given by heavy-tailed distributions [5]. So, the utilization of zero-order Wiener filters in image denoising applications cannot provide the best performance. This drawback can be partially compensated by a better estimation of the local variance of the noise-free image, realized with the aid of a two-stage denoising system. The first stage treats the acquired image, providing a pilot for the second stage [10–13]. A very nice contribution of [12] is the idea of directional windows. The rectangular windows used for the estimation of the local variance of the clean image are replaced by elliptical windows oriented following the preferential direction of the current detail subimage. This is a first example of exploiting the intrascale dependence of the wavelet coefficients, mentioned in [2]. Of course the intrascale dependence of the wavelet coefficients is an intrinsic property of a WT, but it can be exploited only if there is a model at hand to describe it.
If $n$ is Gaussian distributed and $s$ has a Laplacian distribution (this is a heavy-tailed one), then the MAP filter becomes an adaptive soft-thresholding filter [5]. The local variant of this MAP filter is described by

$$\hat{s}(i, j) = \operatorname{sign}\!\left( w(i, j) \right) \left( |w(i, j)| - \frac{\sqrt{2}\,\hat{\sigma}_n^2}{\hat{\sigma}_s(i, j)} \right)_+,$$

where $(x)_+ = \max(x, 0)$.
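Under the Laplacian prior, the resulting MAP filter is the adaptive soft threshold sketched below, a minimal illustration with the threshold $\sqrt{2}\,\hat{\sigma}_n^2/\hat{\sigma}_s$ taken from the relation above:

```python
import numpy as np

def laplacian_map_shrink(w, sigma_n, sigma_s):
    # MAP estimate under Gaussian noise and a Laplacian signal prior:
    # soft thresholding with adaptive threshold sqrt(2)*sigma_n^2/sigma_s
    t = np.sqrt(2.0) * sigma_n ** 2 / sigma_s
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

# usage: with sigma_n = 1 and sigma_s = sqrt(2) the threshold is 1
est = laplacian_map_shrink(np.array([2.0, -0.5]), 1.0, np.sqrt(2.0))
```

Note how the threshold grows with the noise power and shrinks where the local signal deviation is large, which is the adaptivity the text refers to.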
The zero-order Wiener filter and the adaptive soft-thresholding filter are two examples of marginal MAP filters. If the models of the clean image and of the noise are bivariate distributions, then the MAP filter can take into account the interscale dependence of the wavelet coefficients. This is the case of the bishrink filter [5]. In this case the coefficient $s_2$ represents the parent of the coefficient $s_1$ ($s_2$ is the wavelet coefficient at the same position as $s_1$, but at the next coarser scale). Then $w_1 = s_1 + n_1$ and $w_2 = s_2 + n_2$, and the vectors $\mathbf{w} = (w_1, w_2)$, $\mathbf{s} = (s_1, s_2)$, and $\mathbf{n} = (n_1, n_2)$ can be constructed. We can write $\mathbf{w} = \mathbf{s} + \mathbf{n}$. The noise is assumed i.i.d. Gaussian:

$$p_{\mathbf{n}}(\mathbf{n}) = \frac{1}{2\pi\sigma_n^2} \exp\!\left( -\frac{n_1^2 + n_2^2}{2\sigma_n^2} \right).$$

The model of the noise-free image proposed in [5] is

$$p_{\mathbf{s}}(\mathbf{s}) = \frac{3}{2\pi\sigma^2} \exp\!\left( -\frac{\sqrt{3}}{\sigma} \sqrt{s_1^2 + s_2^2} \right),$$

another heavy-tailed distribution. The input-output relation of the bishrink filter is

$$\hat{s}_1 = \frac{\left( \sqrt{w_1^2 + w_2^2} - \dfrac{\sqrt{3}\,\sigma_n^2}{\sigma} \right)_+}{\sqrt{w_1^2 + w_2^2}}\, w_1.$$
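A minimal NumPy sketch of this bivariate shrinkage rule (it accepts scalar or array inputs; the small constant guarding the division is an implementation detail, not part of the estimator):

```python
import numpy as np

def bishrink(w1, w2, sigma_n, sigma_s):
    """Bivariate shrinkage of a coefficient w1 given its parent w2
    at the next coarser scale (Sendur-Selesnick rule)."""
    r = np.sqrt(w1 ** 2 + w2 ** 2)
    # radial soft threshold sqrt(3)*sigma_n^2/sigma_s, clipped at zero
    shrink = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma_s, 0.0)
    return shrink / np.maximum(r, 1e-12) * w1
```

A large parent magnitude protects the child from shrinkage, which is exactly the interscale dependence the bivariate prior encodes.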
This estimator requires prior knowledge of the noise variance and of the marginal variance of the clean image for each wavelet coefficient. To estimate the noise variance from the noisy wavelet coefficients, a robust median estimator applied to the finest-scale wavelet coefficients is used [1]:

$$\hat{\sigma}_n = \frac{\operatorname{median}(|w_i|)}{0.6745}, \qquad w_i \in \text{the finest detail subband}.$$
In [5] the marginal variance of the $k$th coefficient is estimated using the neighboring coefficients in the region $N(k)$, a square-shaped window centered on this coefficient. To make this estimation, one uses $\sigma_w^2 = \sigma^2 + \sigma_n^2$, where $\sigma_w^2$ represents the marginal variance of the noisy observations. For the estimation of the marginal variance of the noisy observations, in [5] the following relation is proposed:

$$\hat{\sigma}_w^2 = \frac{1}{M} \sum_{w_i \in N(k)} w_i^2,$$

where $M$ is the size of the neighborhood $N(k)$. Then $\sigma$ can be estimated as

$$\hat{\sigma} = \sqrt{\left( \hat{\sigma}_w^2 - \hat{\sigma}_n^2 \right)_+}.$$
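The two estimators above — the robust median noise estimate and the clipped marginal variance estimate — can be sketched as follows. The neighborhood is passed in explicitly; window extraction is omitted for brevity:

```python
import numpy as np

def estimate_noise_std(finest_coeffs):
    # robust median estimator from the finest-scale detail coefficients
    return np.median(np.abs(finest_coeffs)) / 0.6745

def estimate_signal_std(neighborhood, sigma_n):
    # marginal variance of the noisy observations in N(k),
    # then subtract sigma_n^2 and clip at zero before the square root
    var_w = np.mean(np.asarray(neighborhood) ** 2)
    return np.sqrt(max(var_w - sigma_n ** 2, 0.0))
```

The clipping matters: in very homogeneous regions the empirical variance can fall below the noise variance, and the estimate must then be zero rather than imaginary.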
In [6], a similar technique is used, but a different bivariate a priori pdf of the clean image is considered. Unfortunately, in this case the MAP filter equation cannot be solved analytically, some numerical methods being required. The advantage of an analytical solution of the MAP filter equation lies in a fast implementation (the numerical methods are avoided) and in the possibility to perform a sensitivity analysis.
The DWT has some drawbacks [7]: the lack of shift invariance and the poor directional selectivity. These disadvantages can be diminished using a complex wavelet transform, like, for example, the DTCWT [7]. The MAP filters constructed in [5, 6] act in the field of the DTCWT.
In [8] a method for removing noise from digital images is described, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. This decomposition is named the steerable pyramid. In it, the image is subdivided into subbands using filters that are polar-separable in the Fourier domain. In scale, the subbands have octave bandwidth with a functional form constrained by a recursive system diagram. In orientation, the functional form is chosen so that the set of filters at a given scale spans a rotation-invariant subspace. This decomposition can be considered a WT with shift invariance and very good directional selectivity. Neighborhoods of coefficients at adjacent positions (intrascale dependence) and scales (interscale dependence) are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficients' amplitudes. Under this model, named Gaussian scale mixture (GSM), the Bayesian least squares (BLS) estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable.
In [9], three novel wavelet-domain denoising methods, for subband-adaptive, spatially adaptive and multivalued image denoising, are developed. The core of this approach is the estimation of the probability that a given coefficient contains a significant noise-free component, called the signal of interest. The WTs used in [9] are the DWT and the UDWT.
The aim of this paper is to correct the behavior of the bishrink filter in the homogeneous regions of very noisy images.
All the denoising methods already presented have some drawbacks. A potential solution to correct these behaviors is to fuse multiple denoising schemes. Using different denoising schemes, we can consider the results as different estimates of the image. Different schemes show dissimilar types of artifacts. Through a linear combination of the results, in [14] the norm of the error is minimized to find the optimum coefficients in a least-square-error sense. The wavelet transform, the contourlet transform, and the adaptive 2D Wiener filtering are used in [14] as blocks of the denoising schemes. Averaging of the results is also proposed in [14], as a special case of linear combination, and is shown to yield near-optimal performance. Its major disadvantage is the oversmoothing. This fusion technique can be improved if the mean computation is delocalized. Nonlocal (NL) means algorithms are proposed in [15]. The NL-means algorithm tries to take advantage of the high degree of redundancy of any natural image. Every small window in a natural image has many similar windows in the same image. In a very general sense, one can define as the "neighborhood of a pixel $i$" any set of pixels $j$ in the image such that a window around $j$ looks like a window around $i$. All pixels in that neighborhood can be used for predicting the value at $i$. Given a discrete noisy image $v$, the estimated value is computed as a weighted average of all the pixels in the image,

$$NL(v)(i) = \sum_j w(i, j)\, v(j),$$

where the weights $w(i, j)$ depend on the similarity between the pixels $i$ and $j$ and satisfy the usual conditions $0 \le w(i, j) \le 1$ and $\sum_j w(i, j) = 1$. In order to compute the similarity between the image pixels, a neighborhood system on the image is defined. While producing state-of-the-art denoising results, this method is computationally impractical [16]. Its high computational complexity is due to the cost of the weight calculation for all pixels in the image during the process of denoising. For every pixel being processed, the whole image is searched, and differences between corresponding neighborhoods are computed.
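A direct, unoptimized sketch of the NL-means estimate for a single pixel, matching the weighted-average definition above; the patch radius and the filtering parameter `h` are illustrative, and the full-image search is exactly the quadratic cost discussed next:

```python
import numpy as np

def nlmeans_pixel(img, i, j, patch=1, h=0.5):
    """One-pixel NL-means estimate: a weighted average of all pixels,
    with weights decaying with the distance between patch neighborhoods."""
    p = patch
    pad = np.pad(img, p, mode='reflect')
    ref = pad[i:i + 2 * p + 1, j:j + 2 * p + 1]   # patch around (i, j)
    weights = np.zeros_like(img, dtype=float)
    for k in range(img.shape[0]):
        for l in range(img.shape[1]):
            cand = pad[k:k + 2 * p + 1, l:l + 2 * p + 1]
            d2 = np.mean((ref - cand) ** 2)       # patch dissimilarity
            weights[k, l] = np.exp(-d2 / h ** 2)
    weights /= weights.sum()                      # weights sum to one
    return np.sum(weights * img)

# usage: on a constant image every patch matches, so the estimate
# equals the constant
flat_est = nlmeans_pixel(np.full((5, 5), 3.0), 2, 2)
```

Running this for every pixel repeats the full double loop, which makes the plain algorithm quadratic in the number of pixels.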
The complexity is then quadratic in the number of image pixels. In [16] the computational complexity of the algorithm proposed in [15] is addressed in a different fashion. The basic idea proposed in [16] is to preclassify the image blocks according to fundamental characteristics, such as their average gray values and gradient orientation. This is performed in a first pass in [16]; while denoising in the second pass, only blocks with similar characteristics are used to compute the weights. Accessing these blocks can be efficiently implemented with simple lookup tables. The basic idea is then to combine ideas from [15], namely, the weighted average based on neighborhood similarity, with concepts that are classical in information theory and were introduced in the image denoising context. Images with much finer texture and details will not benefit that much from the denoising; while reducing most of the noise, this type of processing will inevitably degrade important image features [17]. The first problem is to distinguish between good and bad candidates for denoising. Many natural images exhibit a mosaic of piecewise smooth and textured patches. This type of image structure calls for a position-varying (spatially varying) filtering operation. Textured regions are characterized by high local variance. In order to preserve the detailed structure of such regions, the level of filtering should be reduced over these regions. The basic concept amounts to a reduction in the extent of filtering in regions where the signal power exceeds that of the noise. So, the solution proposed in [17] supposes the anisotropic treatment of the acquired image, taking into account the local variance values of its regions. This procedure can be seen as an NL-means algorithm where the classification of the image blocks is done on the basis of their local variance. The algorithm proposed in this paper is of the same kind, but it has a two-stage architecture.
The first stage performs a first denoising followed by a classification of the denoised image blocks based on their local variances. The second stage fuses multiple denoising schemes. The fusion is done with the aid of the classes provided by the first stage, using an NL-means-like algorithm. But this algorithm does not operate on a single image; it operates on the set of results of all the denoising schemes to be fused. So, in our case, the delocalization is realized by the diversification of the denoising schemes. We average pixels having the same coordinates in the different results of the different denoising schemes. The running time of the algorithm proposed in [16] is linear in the number of image pixels. The algorithm in [17] is also fast. The algorithm proposed in this paper is even faster. Our algorithm is explained as follows.
First we prove that the performance of the bishrink filter degrades with the increasing of the noise standard deviation estimate and with the decreasing of the marginal standard deviation of the noise-free image. Next we propose a new strategy for the correction of those degradations. It is based on a two-stage architecture. In the first stage, a variant of the genuine denoising algorithm proposed in [5] is applied, obtaining a first result. Computing the standard deviation of each pixel of the first result, the pilot image is obtained. Its pixels are classified in regions according to their values. This is equivalent to the image block preclassification proposed in [16]. Our classification criterion is the same as that proposed in [17].
The set of coordinates of the pixels belonging to one of these regions will represent one of the masks used in the second stage. The construction of the second stage is based on the idea of diversification. Using different diversification mechanisms, several denoising schemes are obtained. Their output images are called partial results. These results are synthesized with the aid of the masks generated at the end of the first stage. The synthesis is achieved by NL-averaging.
The structure of this paper is the following. In Section 2 a sensitivity analysis of the bishrink filter is presented and some of its drawbacks are identified. Then a solution to reduce these drawbacks is proposed and analyzed. In Section 3 all the details of the proposed denoising algorithm are given. Section 4 is dedicated to the presentation of the simulation results and to some comparisons with the best available wavelet based image denoising results conceived to illustrate the effectiveness of the proposed algorithm. The paper's conclusion is formulated in Section 5.
2. The Bishrink Filter
The estimator described by (5)–(10) is named the bishrink filter and is applied in the DTCWT domain. The sensitivity of the bishrink filter with respect to the estimation of the noise standard deviation ($\hat{\sigma}_n$) can be computed with the relation

$$S_{\hat{\sigma}_n}^{\hat{s}_1} = \frac{\partial \hat{s}_1}{\partial \hat{\sigma}_n} \cdot \frac{\hat{\sigma}_n}{\hat{s}_1}.$$

The input-output relation of the bishrink filter (7) can be put in the following form (in the region where the shrinkage factor is not clipped to zero):

$$\hat{s}_1 = \left( 1 - \frac{\sqrt{3}\,\hat{\sigma}_n^2}{\hat{\sigma}\sqrt{w_1^2 + w_2^2}} \right) w_1.$$

So, it can be written:

$$S_{\hat{\sigma}_n}^{\hat{s}_1} = -\frac{2\sqrt{3}\,\hat{\sigma}_n^2}{\hat{\sigma}\sqrt{w_1^2 + w_2^2} - \sqrt{3}\,\hat{\sigma}_n^2}.$$

The absolute value of this sensitivity is an increasing function of $\hat{\sigma}_n$. When the value of the estimation of the noise standard deviation $\hat{\sigma}_n$ is higher, the performance of the bishrink filter is poorer.
Another very important parameter of the bishrink filter is the local estimation of the marginal standard deviation of the noise-free image, $\hat{\sigma}$. The sensitivity of the estimation with respect to $\hat{\sigma}$ is given by

$$S_{\hat{\sigma}}^{\hat{s}_1} = \frac{\partial \hat{s}_1}{\partial \hat{\sigma}} \cdot \frac{\hat{\sigma}}{\hat{s}_1} = \frac{\sqrt{3}\,\hat{\sigma}_n^2}{\hat{\sigma}\sqrt{w_1^2 + w_2^2} - \sqrt{3}\,\hat{\sigma}_n^2}.$$

This is a decreasing function of $\hat{\sigma}$. The precision of the estimation based on the use of the bishrink filter decreases with the decreasing of $\hat{\sigma}$. The local variance of a pixel can be interpreted in two ways. First, it represents a homogeneity degree measure for the region to which the considered pixel belongs. This behavior can be observed in Figure 2, where the Barbara image and the image composed by the local variances of its pixels are presented together.
The regions with high homogeneity in the Barbara image correspond to the dark regions in the image of local variances. All the pixels belonging to a perfectly homogeneous region have the same value, so their local variances are equal to zero. The values of the pixels belonging to a textured region oscillate in space, and their local variances are not null. Finally, the pixels belonging to an edge have the highest local variances. So, the bishrink filter treats the edges very well, the estimation of the textured regions must be corrected, and the worst treatment corresponds to the homogeneous regions. The denoising quality of pixels with slightly different $\hat{\sigma}$ will be very different in the homogeneous regions. The sensitivity increases with the increasing of $\hat{\sigma}_n$. So, the degradation of the homogeneous and textured zones of the noise-free image is amplified by the increasing of the noise standard deviation. Consequently, the most difficult regime of the bishrink filter corresponds to the treatment of homogeneous regions of very noisy images.
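The local variance map used in this discussion can be computed as in the following sketch; the window size is illustrative:

```python
import numpy as np

def local_variance_map(img, win=5):
    """Per-pixel local variance in a sliding window: near zero in
    homogeneous regions, larger over textures, largest at edges."""
    pad = win // 2
    p = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + win, j:j + win].var()
    return out

# usage: a perfectly homogeneous image has a null variance map
vmap = local_variance_map(np.full((6, 6), 7.0))
```

Displayed as an image, such a map reproduces the dark-homogeneous / bright-edge pattern described for Figure 2.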
Similar sensitivity analyses can be accomplished for the zero-order Wiener filter or for the adaptive soft-thresholding filter, concluding that their worst behavior also corresponds to the homogeneous regions of their noise-free input image component. Secondly, the local variance of a pixel gives some information about the frequency content of the region to which the considered pixel belongs. If the pixels of a given region have low local variances, then the considered region contains low frequencies. If these pixels have high local variances, then the considered region contains high frequencies. The aim of this paper is to reduce the distortion produced by a denoising system based on the DTCWT and the bishrink filter as a consequence of the sensitivities $S_{\hat{\sigma}_n}^{\hat{s}_1}$ and $S_{\hat{\sigma}}^{\hat{s}_1}$.
3. The Solution Proposed
The denoising quality of pixels with slightly different local variances is different. This difference is higher when the corresponding values of the local variances are smaller. An example can be observed in the first picture of Figure 3.
This picture represents a homogeneous region of the Lena image affected by AWGN, denoised with the method proposed in [5]. Some visible artifacts can be observed.
Our goal is to make the denoising more uniform. First we make the data more uniform with respect to the values of the local variances of the noise-free component of the acquired image. At the beginning of our first stage, the denoising method proposed in [5] is applied, obtaining a first result. By segmenting the first result according to the values of its local variances, we obtain the classes. Each of them contains estimations of the noise-free component of the acquired image with local variances included in a specified interval. The range of the local variances in the interior of each class is relatively small, so the data belonging to a considered class are uniform. A segmentation example for the Barbara image can be observed in Figure 2. The coordinates of the pixels belonging to a class are registered into a corresponding mask. These masks represent the result of the first stage of the proposed denoising method. The classes are obtained by segmenting the first result, which represents an estimation of the noise-free component of the acquired image. The second stage of the proposed denoising method treats the acquired image once again. This time we suppose that the local variances of the noise-free part of the acquired image are identical with the local variances of the first result. With the aid of the masks, the classes of the first result can be imposed on the noise-free component of the acquired image. This transfer procedure creates some uncertainty, so we need to harmonize the denoising uncertainty among the classes of the noise-free component of the acquired image. The uncertainty of the denoising must be as small as possible. At the beginning of the second stage, it is inversely proportional to the mean value of the local variance which corresponds to each class. This stage can be implemented by a diversification mechanism followed by a fusion mechanism.
The diversification mechanisms produce partial results for each class of the acquired image. The idea of diversification, which lies at the basis of the construction of the second stage of the proposed denoising architecture, comes from the communications field, where spatial or temporal diversification techniques are used to add a fixed amount of redundancy to a message, improving the information transmission. Finally, the extra data are rejected using a fusion procedure and the message is reconstructed in a form as close as possible to the original one. The diversification principle was already used in denoising. For example, to reduce the unwanted oscillations near edges, which appear because the DWT is not shift-invariant, Coifman and Donoho introduced the cycle-spinning concept [13]. Rotation invariance can also be obtained using the diversification principle [18]. This concept was also used in [19, 20] to improve the denoising. The strategy proposed in [14] can also be considered diversification. In this paper three diversification mechanisms are proposed. The first one supposes the utilization of two different mother wavelets. The others are based on the utilization of two different variants of the bishrink filter: the adaptive bishrink filter with global estimation of the local variance and the mixed bishrink filter. Using these diversification mechanisms and the genuine bishrink filter, the partial results are obtained. The corresponding class of the final result can be obtained by the fusion of the same classes of the partial results. The simplest linear fusion technique for the partial results is their averaging. This method was already used in denoising applications [13, 14, 18, 19]. In Figure 4, the fusion system applied in the interior of a specified class is presented. An averager is a linear low-pass filter. Its cutoff frequency is inversely proportional to the number of partial results.
The frequency content of a class corresponding to a higher value of local variance is richer than the frequency content of a class that corresponds to a smaller value of local variance. So, for the fusion of a class corresponding to a smaller value of local variance, an increased number of partial results is necessary. The fusion procedure uses a different number of partial results from class to class, because these classes have different uncertainties. It is based on an NL-means-like algorithm.
Using the masks generated at the end of the first stage, we identify the corresponding classes in each partial result. Each one contains only the pixels with the coordinates specified by the corresponding mask. The amount of noise reduction and the oversmoothing degree in the interior of a class increase with the increasing of the number of partial results used. The fusion procedure proposed prevents the oversmoothing using a different number of partial results in regions with different local variances of the noisefree component.
Our previous simulations suggest a value of six for the number of classes. So, we have six classes and six denoising systems in Figure 4. For the first class only one weight is nonzero. For the second class two weights are nonzero, and so on. For the sixth class, all the weights of the system in Figure 4 are nonzero.
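A minimal NumPy sketch of this class-dependent fusion; the rule "a pixel in class c is the average of the first c partial results" follows the description above, and the array shapes are illustrative:

```python
import numpy as np

def fuse_partial_results(partials, class_map):
    """Fuse K partial results: a pixel in class c (1 = highest local
    variance) is the average of the first c partial results, so smooth
    regions are averaged over more estimates than textured ones."""
    partials = np.asarray(partials)            # shape (K, H, W)
    K = partials.shape[0]
    out = np.zeros(partials.shape[1:])
    for c in range(1, K + 1):
        mask = class_map == c
        out[mask] = partials[:c].mean(axis=0)[mask]
    return out

# usage with two toy partial results and two classes
partials = [np.ones((2, 2)), 3.0 * np.ones((2, 2))]
class_map = np.array([[1, 2], [2, 1]])
fused = fuse_partial_results(partials, class_map)
```

Class-1 pixels keep the single best estimate, while class-2 pixels take the two-result average, which is the anti-oversmoothing behavior the text describes.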
Other fusion techniques, like median filtering or maximum detection, can also be imagined. A very interesting fusion technique, based on the use of the multiwavelet DWT, is proposed in [21].
Any of the three variants of the bishrink filter proposed in this paper has better performance than the local zeroorder Wiener filter. The DTCWT is superior to the DWT or the UDWT in denoising applications. So, the performance obtained using the proposed denoising method is superior to the performance reported in [11] or in [12].
3.1. The Proposed Implementation
The architecture of the proposed denoising kernel is presented in Figure 5. The first stage of the algorithm is represented in red. It is composed of four blocks. The first three blocks implement the genuine denoising method, based on the use of the bishrink filter with global estimation of the local variance (F2). Our previous simulations indicate that F2 is the best variant of the bishrink filter from the point of view of the PSNR enhancement.
The first block of the first stage implements a DTCWT and the third one the corresponding inverse transform (IDTCWT). So, a first result is obtained. The pilot image is generated by the segmentation of this first result, done by the block Segm. The segmentation is realized by comparing the local standard deviation of each pixel of the first result with some thresholds. This way the data contained in each class are uniform. Registering the coordinates of the pixels belonging to each class, six masks are generated.
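A sketch of this threshold-based classification into masks; the threshold values below are placeholders, not the paper's actual interval bounds, and with five (decreasing) thresholds the function would produce the six classes used in Figure 5:

```python
import numpy as np

def classify_by_local_std(std_map, thresholds):
    """Assign each pixel to one of len(thresholds)+1 classes by comparing
    its local standard deviation with decreasing thresholds; class 1
    collects the highest values (edges), the last class the lowest
    (homogeneous zones)."""
    n_classes = len(thresholds) + 1
    classes = np.full(std_map.shape, n_classes, dtype=int)
    for c, t in enumerate(thresholds, start=1):
        # first (largest) threshold a pixel exceeds fixes its class
        classes[(std_map >= t) & (classes == n_classes)] = c
    return classes

# usage with three placeholder thresholds, hence four classes
classes = classify_by_local_std(np.array([[5.0, 1.0], [0.1, 3.0]]),
                                [4.0, 2.0, 0.5])
```

The per-class boolean maps `classes == c` are exactly the masks handed to the class selectors of the second stage.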
The second stage of the denoising system in Figure 5 is represented in blue. To realize the diversification required in the second stage of the proposed algorithm, two types of WT, DTCWT_A and DTCWT_B, are computed, obtaining two sets of wavelet coefficients. Next, three variants of the bishrink filter (the genuine one, the adaptive bishrink filter with global estimation of the local variance, and the mixed bishrink filter) are applied in the domain of each DTCWT. Six sequences of estimated wavelet coefficients are obtained. For each one the inverse WT, IDTCWT, is computed, obtaining six partial results. This way the redundancy is increased, because the actual volume of data is six times higher than the initial volume of data.
With the aid of the six masks generated at the end of the first stage, the six classes of each partial result are identified. Using the class selectors CS_1–CS_6, the partial results are individually treated. Each mask is used by the corresponding class selector. These systems select the pixels of their input image whose coordinates belong to the corresponding mask. CS_1 is associated with the class which contains the highest values of the local standard deviation. It generates the first class of the final result and contributes to the generation of all the other classes of the final result. CS_2 corresponds to the next class, participating in the construction of the second through sixth classes of the final result. CS_3 corresponds to the next class and contributes to the construction of the third through sixth classes of the final result, and so on. Finally, CS_6 is associated with the remaining class and participates only in the construction of the sixth class of the final result (the one that contains the smallest values of the local variance). By NL-averaging (an NL-means-like methodology), the six classes of the final result are obtained. The first class of the final result is identical with the output of CS_1. The second class of the final result is obtained by averaging the pixels of the outputs of CS_1 and CS_2, and so on. For the last class of the final result, containing soft textures and homogeneous zones, all the pixels belonging to the outputs of CS_1, CS_2, …, and CS_6 are averaged. Assembling these classes by concatenation, the final estimation is obtained. In the following, the construction of each block in Figure 5 is presented in detail.
3.2. The Diversification Mechanisms
The first diversification mechanism refers to the construction of the DTCWT. Since an image usually consists of several regions of different smoothness, the sparsity of its representation in a single wavelet domain is limited. This naturally motivates the use of multiple wavelet transforms for denoising. This procedure is used, for example, in [20]. Besov balls are convex sets of images whose Besov norms are bounded from above by their radii. Projecting an image onto a Besov ball of proper radius corresponds to a type of wavelet shrinkage for image denoising. By defining Besov balls in multiple wavelet domains and projecting onto their intersection using the projection onto convex sets (POCS) algorithm, an estimate is obtained in [20] which effectively combines estimates from multiple wavelet domains.
There are two kinds of filters used for the computation of the DTCWT: for the first decomposition level and for the other levels [7]. The first diversification mechanism is realized through the selection of two types of filters for the first level. The first one is selected from the (9,7)-tap Antonini filter pair and the second one corresponds to the pair of Farras nearly symmetric filters for an orthogonal 2-channel perfect reconstruction filter bank [22]. The idea of diversification by using multiple mother wavelets was also exploited in [19, 21], where the bishrink filter was associated with the DWT. The same WT was used in [20]. The synthesis of the final result was carried out in [19] by simple averaging and in [20, 21] by variational frameworks.
The other two diversification mechanisms refer to the construction of the bishrink filter. F1 is the genuine bishrink filter. The filter F2 is a bishrink filter with global estimation of the local variance [23]. It was constructed for the reasons presented in the following. The estimation in (9) is not very precise. First, it is based on the correct assumption that the wavelet coefficients are modeled as zero-mean random variables. But their restrictions to the finite neighborhood $N(k)$ are not necessarily zero-mean random variables. So, it is better to estimate first the means in the neighborhood:

$$\hat{\mu}(k) = \frac{1}{M} \sum_{w_i \in N(k)} w_i, \qquad (15)$$

and then the variances:

$$\hat{\sigma}_w^2(k) = \frac{1}{M} \sum_{w_i \in N(k)} \left( w_i - \hat{\mu}(k) \right)^2. \qquad (16)$$

Finally, the relation (10) can be applied. In the case of the bishrink filter with global estimation of the local variance, the detail wavelet coefficients produced by the first tree of the DTCWT computation block and those produced by the other tree are treated separately. Applying in order the relations (15), (16), and (10) for the two trees implementing each of the DTCWTs, the local means and marginal standard deviations are computed in each neighborhood $N(k)$ for both trees. Then the global estimation of the marginal standard deviation is obtained by combining the local estimates of the two trees.
Using this estimation, the bishrink filter with global estimation of the local variance is applied separately to the real detail wavelet coefficients produced by each of the two trees composing the DTCWT. In Figure 6 a comparison of the bishrink filter with the bishrink filter with global estimation of the local variance is presented, for the Lena image perturbed by zero-mean AWGN.
It can be observed that the bishrink filter with global estimation of the local variance better preserves the extreme values of the clean component of the input image.
The filter F3 is the mixed bishrink filter proposed in [19]. After three iterations of each DWT representing one tree of a DTCWT, the pdf of the wavelet coefficients can be considered Gaussian. The mixed bishrink filter therefore acts for the first three iterations of each DWT as a bishrink filter with global estimation of the local variance; for the fourth iteration it acts as a local adaptive Wiener filter; and for the fifth iteration (the last one) it acts as a hard thresholding filter [1], with the threshold equal to 3 .
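The level-dependent rule above can be sketched as a dispatcher. The `bishrink` callable, the Wiener window size, and the exact threshold constant are placeholders standing in for the paper's implementation, which is not reproduced here:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def hard_threshold(c, thr):
    """Hard thresholding [1]: zero the coefficients below the threshold."""
    return np.where(np.abs(c) > thr, c, 0.0)

def wiener_shrink(c, noise_var, win=5):
    """Local adaptive Wiener shrinkage of the detail coefficients."""
    pad = win // 2
    padded = np.pad(c, pad, mode='reflect')
    local_var = sliding_window_view(padded, (win, win)).var(axis=(-2, -1))
    signal_var = np.maximum(local_var - noise_var, 0.0)
    return c * signal_var / (signal_var + noise_var + 1e-12)

def mixed_bishrink_level(c, level, noise_std, bishrink):
    """Per-level dispatch of the mixed bishrink filter:
    levels 1-3 -> bishrink with global variance estimation (passed in),
    level 4    -> local adaptive Wiener filter,
    level 5    -> hard thresholding at 3 * noise_std."""
    if level <= 3:
        return bishrink(c)
    if level == 4:
        return wiener_shrink(c, noise_std ** 2)
    return hard_threshold(c, 3.0 * noise_std)
```

The dispatch reflects the statistical argument in the text: once the coefficient pdf becomes approximately Gaussian at the deeper levels, the Gaussian-optimal Wiener and thresholding rules replace the bivariate shrinkage.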
The effect of the filter F3 in the framework proposed in [19] can be observed in Figure 7 (middle). Preliminary extensive tests proved that, from the peak signal-to-noise ratio (PSNR) point of view, the six estimates in Figure 5 can be ranked from best to worst.
3.3. The Classification
The image is segmented into six classes whose elements have a local variance belonging to one of six possible intervals. An example is presented in Figure 2. The Barbara image perturbed by AWGN was denoised, obtaining a partial result. The six classes of this partial result are represented too.
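A minimal sketch of this segmentation: each pixel is assigned a class index according to the interval its local variance falls into. The interval bounds used below are hypothetical; the values actually used in the paper are not reproduced in this excerpt:

```python
import numpy as np

def classify_by_local_variance(local_var, bounds):
    """Assign each pixel to one of len(bounds)+1 classes according to
    the interval its local variance falls into (bounds increasing)."""
    return np.digitize(local_var, bounds)
```

With five bounds this yields the six classes described in the text, indexed 0 through 5.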
The class selector CS in Figure 5 selects the class associated with each interval.
The preliminary tests already mentioned also suggest suitable values for the bounds of the intervals.
4. Simulation Results
We present three types of simulation results: for AWGN, for synthesized speckle noise, and for real SONAR images.
4.1. AWGN
Firstly we compare the proposed additive-noise denoising kernel to other effective systems in the literature, namely, the interscale orthonormal wavelet thresholding denoising system proposed in [2], the multiwavelet approach from [20], the genuine bishrink filter proposed in [5], the processor based on the SαS family of distributions presented in [6], the BLS-GSM system proposed in [8], and the denoising system based on the estimation of the probability of the presence of a signal of interest proposed in [9]. The comparison was done using four images: Peppers, Lena, Boat, and Barbara, all of the same size.
First, we compared the performance in terms of output PSNRs. Next, we analyzed the visual aspect of the results. Let x and x̂ denote the clean and the denoised images. The root mean square (rms) of the approximation error is computed by

rms = sqrt( (1/N) * sum_i (x(i) - x̂(i))^2 ),

where N is the number of pixels. The PSNR in dB is given by

PSNR = 20 log10(255 / rms).
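These standard definitions can be computed directly; 255 is the assumed peak value for 8-bit images:

```python
import numpy as np

def rms_error(clean, denoised):
    """Root mean square of the approximation error over the N pixels."""
    diff = clean.astype(float) - denoised.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def psnr_db(clean, denoised, peak=255.0):
    """PSNR in dB for 8-bit images: 20 * log10(peak / rms)."""
    return 20.0 * np.log10(peak / rms_error(clean, denoised))
```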
The PSNR values obtained using the denoising systems already mentioned at the beginning of this section are tabulated in Table 1 and are taken from [2, 5, 6, 8, 9, 20].
Analyzing this table, some observations can be made. For all the test images and all noise levels, with only one exception (Barbara), the best results are obtained using the BLS-GSM algorithm. The PSNR enhancement realized through the proposed algorithm closely follows the performance of the BLS-GSM algorithm. There are two implementations of the algorithm proposed in [6]. The first one, which does not make a local estimation, was considered for the treatment of the Lena image in Table 1. The second implementation makes a local estimation and has better performance. It was considered for the treatment of the Boat image in Table 1. The use of the SαS family requires numerical methods to solve the MAP filter equation, and the solution obtained does not have an analytical form. So, a sensitivity analysis is not possible for the processor in [6]. Unfortunately, neither the method described in [6] nor the proposed denoising method exploits the intrascale dependence of the wavelet coefficients. The advantage of the system proposed in [9] is the consideration of the intrascale dependence of the wavelet coefficients. Its disadvantage lies in the utilization of the UDWT, which is perfectly shift invariant but has poor directional selectivity. It is also very redundant.

The comparisons already presented take into account only the PSNR, which is a global quality measure. In the following, we present some considerations about the visual quality of the results. First, a comparison of the denoising system based on the genuine bishrink filter described in [5] and the proposed denoising algorithm, from the point of view of the treatment of homogeneous zones, is reported. An example for the Lena image is given in Figure 3. The clean image was perturbed by AWGN, obtaining a very noisy image. A region obtained by cropping the image is illustrated on the first line of Figure 3. The same region was extracted from the denoised image and is illustrated on the second line of Figure 3.
The proposed method decreases the distortions introduced by the denoising system based on the genuine bishrink filter [5], especially in the case of very noisy images. An objective measure of the homogeneity degree of a region is defined by the ratio of the square of the mean to the variance of the pixels situated in the considered region. In the following, this measure will be denoted by Ra. In Table 2 we present the enhancements of Ra for the proposed denoising method and for the method in [5] for the Lena image. We have selected a zone located in the upper-left corner. The degree of homogeneity of these regions (expressed by Ra) differs from noise level to noise level.
In each experiment, the enhancement of Ra realized through the proposed denoising system is higher than the enhancement of Ra realized through the denoising system in [5]. So our goal of accomplishing a better treatment of the homogeneous regions is objectively verified.
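The homogeneity measure Ra and its enhancement can be sketched as follows (the enhancement is taken as the after/before ratio, so larger values mean a smoother result):

```python
import numpy as np

def ra(region):
    """Homogeneity measure Ra: squared mean over variance of a region."""
    return region.mean() ** 2 / region.var()

def ra_enhancement(region_before, region_after):
    """Ratio of Ra after denoising to Ra before denoising."""
    return ra(region_after) / ra(region_before)
```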
To continue the visual quality analysis, we devised the following procedure. First, the edges of the clean image are detected using the Roberts detector. Next, the edges of the denoising result are detected using the same detector with the same parameters. Then the rms of the difference of the two edge images is computed, and its dependence on the input PSNR is sketched. In Figure 8 we represent the results of the comparisons made on the basis of this procedure between the proposed denoising system and the system in [5] for the images Peppers, Lena, Boat, and Barbara. The edge treatment realized through the proposed denoising system is better than that realized through the system based on the genuine bishrink filter [5].
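The procedure can be sketched with a plain Roberts cross operator; the detector parameters actually used in the experiments are not reproduced here, so this is a simplified stand-in:

```python
import numpy as np

def roberts_edges(img):
    """Roberts cross gradient magnitude, a simple edge detector."""
    img = img.astype(float)
    gx = img[:-1, :-1] - img[1:, 1:]   # diagonal difference
    gy = img[:-1, 1:] - img[1:, :-1]   # anti-diagonal difference
    return np.hypot(gx, gy)

def edge_rms_difference(clean, denoised):
    """rms of the difference between the two edge images."""
    d = roberts_edges(clean) - roberts_edges(denoised)
    return np.sqrt(np.mean(d ** 2))
```

A value of zero indicates that the denoiser preserved the edge map of the clean image exactly; larger values indicate edge distortion.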
It is interesting to evaluate the various denoising methods from a practical point of view: the computation time. With the simple univariate method proposed in [2], the whole denoising process (including four iterations of an orthonormal wavelet transform) lasts approximately 1.6 seconds on a Power Mac G5 workstation with a 1.8 GHz PowerPC 970 CPU. With the interscale-dependent thresholding function proposed in [2], the whole denoising task takes about 2.7 seconds.
In the same paper, the computation times of some of the other denoising methods presented in Table 1 are estimated. For example, for the redundant BLS-GSM the computation time is estimated at 311.8 seconds. The computation time of the ProbShrink algorithm described in [9] is estimated in [2] at 6.6 seconds. In [5] it is noted that the All Cops implementation of the denoising algorithm based on the association of the DTCWT with the genuine bishrink filter takes 25 seconds on a 450 MHz Pentium II. The All Cops program for the proposed algorithm takes 41 seconds on a 2.4 GHz Pentium IV. So, the rankings established following the computation speed criterion and the output PSNR criterion for the denoising methods analyzed on the basis of Table 1 are inverse to each other. All the previous considerations were made for images of the same size.
4.2. Synthesized Speckle
In this case the noise is generated following a Rayleigh distribution with unitary mean and is of a multiplicative nature. It is generated by computing the square root of a sum of squares of two white Gaussian noises having the same variance. For the Lena image, applying the denoising system in Figure 1, we obtained the result presented in Figure 9. The PSNR gain achieved by the proposed method is, in this case, 10 dB.
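The unit-mean Rayleigh speckle described above can be synthesized as follows. The variance 2/π chosen for the two Gaussian components is an assumption that makes the resulting field have unitary mean, since a Rayleigh variable built from two N(0, σ²) components has mean σ·sqrt(π/2):

```python
import numpy as np

def rayleigh_speckle(shape, rng=None):
    """Unit-mean Rayleigh field: square root of the sum of squares of
    two zero-mean white Gaussian noises with equal variance 2/pi."""
    rng = np.random.default_rng(rng)
    sigma = np.sqrt(2.0 / np.pi)   # makes E[R] = sigma * sqrt(pi/2) = 1
    g1 = rng.normal(0.0, sigma, shape)
    g2 = rng.normal(0.0, sigma, shape)
    return np.hypot(g1, g2)

def add_speckle(clean, rng=None):
    """Multiplicative speckle model: noisy = clean * speckle."""
    return clean * rayleigh_speckle(clean.shape, rng)
```

The unitary mean is what allows the mean-correction block of Figure 1 to equate the mean of the acquired image with the mean of its noise-free component.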
A comparison of the proposed method with the classical speckle-removing methods proposed by Lee, Frost, and Kuan, and with the wavelet-based methods from [19, 21], for the example in Figure 9, is presented in Table 3. In the case of the classical speckle-removing methods, rectangular estimation windows were used.
From the PSNR point of view our method has the best performance among those compared in Table 3.
The proposed method can be considered equivalent to the SAR denoising method proposed in [24]. The two denoising algorithms proposed in [24] use the UDWT. It is computed either with the aid of the Daubechies mother wavelet db8 or with the pair of biorthogonal mother wavelets bior9.7. The first denoising algorithm proposed in [24] performs local linear minimum mean square error (LLMMSE) filtering in the UDWT domain. The second one uses a MAP filter constructed under the assumption that the noise-free wavelet coefficients and the wavelet coefficients of the noise are distributed according to Generalized Gaussian Distributions. The parameters of those pdfs are estimated for each pixel of the input image. The corresponding MAP filter equation is solved with the aid of numerical methods. A comparison of the proposed denoising method with the methods proposed in [24] is presented in Table 4.

An excellent criterion for assessing the quality of a denoising method conceived for the reduction of multiplicative noise is based on the computation of the method noise. It represents the ratio of the noisy image to the denoising result [25]. The method noise must be identical with the input noise for a perfect denoising method. It can be observed, analyzing Figure 9, that the input noise (represented in the second picture from the first line) has the same aspect as the method noise (represented in the second picture from the second line). There are some fine differences between the images of the input noise and of the method noise, noticeable especially in the dark regions of the noise-free component of the input image (represented in the first picture of the first line in Figure 9). In the last picture from the second line, a comparison between the histograms of the input noise (top) and of the method noise (bottom) is carried out, highlighting the statistical differences between these two noises.
They are distributed following the same type of law (a Rayleigh law), but the input noise has a higher variance. This means that the contrast of the noise-free input image is affected by the proposed denoising method.
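The method noise of a multiplicative-noise denoiser can be computed as a pixelwise ratio. The small `eps` guard against division by zero is an implementation detail added here, not part of the definition in [25]:

```python
import numpy as np

def method_noise(noisy, denoised, eps=1e-12):
    """Method noise for multiplicative models [25]: pixelwise ratio of
    the noisy image to the denoising result."""
    return noisy / (denoised + eps)
```

For a perfect denoiser the denoised image equals the clean component, so this ratio recovers the input speckle field exactly; any systematic deviation (such as the variance gap noted above) exposes what the denoiser removed from the signal rather than from the noise.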
It can be observed that the proposed method treats edges and homogeneous regions well. Its drawback is the treatment of textures: some of the fine textures of the clean component of the acquired image are erased by the denoising. A better analysis of the visual aspect of the proposed method can be carried out if it is applied to the test image proposed in [26], which consists of a collection of six synthetic and real subimages. The subimages 1, 5, and 6 represent real scenes. The subimage 2 contains three types of synthesized textures, and the subimage 4 contains some synthesized homogeneous regions and contours. The subimage 3 represents a zone of the Lena image. In this case the speckle noise is synthesized following the procedure presented in [26]. The methods from [21, 26] are compared with the proposed denoising method.
A comparison of denoising methods based on the subimages 2, 3, and 4 is presented in Figure 10. The best treatment of the homogeneous regions is achieved by the purely statistical method in [26], but it erases some contours and textures, producing an oversmoothed result. The other two methods use the DTCWT and treat the details better. The method proposed in [21], based on the association of the DTCWT with the genuine bishrink filter, does not eliminate all the noise. This effect is easily visible in the homogeneous regions. The proposed denoising method treats real scenes well, completely eliminating the noise and introducing small distortions. The treatment of edges is excellent. The denoising of rough textures is more accurate. Some distortions are visible at the borders of homogeneous zones. In the interior of those regions residual noise can be observed.
4.3. Real SONAR Images
Figure 7 shows the original SONAR image "Swansea" and the results obtained with the method in [19] and with the proposed method. The visual analysis of the filtered image proves the correctness of our assumptions. Indeed, the result of the proposed method has a better visual aspect, the result in [19] being slightly oversmoothed. An objective measure of the homogeneity degree of a region was proposed for SAR images; it is named the Equivalent Number of Looks (ENL) and is defined by the ratio of the square of the mean to the variance of the pixels situated in the considered region. The ENL enhancement of a denoising method in a homogeneous region is defined by the ratio of the ENLs of the considered region computed after and before the application of the method.
The performance obtained for homogeneous regions through the proposed denoising method is confirmed by the important enhancement of the ENL obtained considering a homogeneous region of the image.
5. Conclusions
This paper presents an effective image denoising algorithm for SONAR images. It is based on a new additive-noise denoising kernel that improves the treatment of homogeneous zones in very noisy images. Inspired by [5], our algorithm uses one of the best WTs and a very good MAP filter. It outperforms other denoising methods in terms of both visual aspect and PSNR enhancement. The proposed diversity enhancement technique, based on the segmentation of the local standard deviation image obtained at the output of the first stage, is more general. It can be applied with minor modifications to other denoising architectures. This is an ad hoc method, and we do not yet have a complete rigorous theoretical justification for it.
We presented our simulation results and compared them with published results in order to illustrate the effectiveness of the proposed algorithm. The comparisons suggest that the results obtained are competitive with the best results reported in the literature for SONAR image denoising.
One of our future research works will be the inclusion in the proposed algorithm of the intrascale dependence of wavelet coefficients information. We believe that the idea of directional estimation windows proposed in [12] is a good candidate for this task. Another possible solution for the inclusion of the intrascale dependence is the consideration of the phase information provided by the DTCWT in a similar manner to the one recently proposed by Kingsbury [27], in association with the BLSGSM algorithm.
The research of new diversification mechanisms and of new synthesis techniques, like, for example, that proposed in [28], will represent other future directions for our team.
Finally, we intend to provide a sound theoretical explanation for the selection of the thresholds of the segmentation algorithm.
References
Donoho DL, Johnstone IM: Adapting to unknown smoothness via wavelet shrinkage. Journal of the American Statistical Association 1995, 90(432): 1200-1224. 10.2307/2291512
Luisier F, Blu T, Unser M: A new SURE approach to image denoising: interscale orthonormal wavelet thresholding. IEEE Transactions on Image Processing 2007, 16(3): 593-606.
Cai Z, Cheng TH, Lu C, Subramanian KR: Efficient wavelet-based image denoising algorithm. Electronics Letters 2001, 37(11): 683-685. 10.1049/el:20010466
Chang SG, Yu B, Vetterli M: Adaptive wavelet thresholding for image denoising and compression. IEEE Transactions on Image Processing 2000, 9(9): 1532-1546. 10.1109/83.862633
Sendur L, Selesnick IW: Bivariate shrinkage with local variance estimation. IEEE Signal Processing Letters 2002, 9(12): 438-441. 10.1109/LSP.2002.806054
Achim A, Kuruoğlu EE: Image denoising using bivariate α-stable distributions in the complex wavelet domain. IEEE Signal Processing Letters 2005, 12(1): 17-20.
Kingsbury N: Complex wavelets for shift invariant analysis and filtering of signals. Applied and Computational Harmonic Analysis 2001, 10(3): 234-253. 10.1006/acha.2000.0343
Portilla J, Strela V, Wainwright MJ, Simoncelli EP: Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Transactions on Image Processing 2003, 12(11): 1338-1351. 10.1109/TIP.2003.818640
Pizurica A, Philips W: Estimating the probability of the presence of a signal of interest in multiresolution single- and multiband image denoising. IEEE Transactions on Image Processing 2006, 15(3): 654-665.
Ghael SP, Sayeed AM, Baraniuk RG: Improved wavelet denoising via empirical Wiener filtering. Wavelet Applications in Signal and Image Processing V, July 1997, San Diego, Calif, USA, Proceedings of SPIE 3169: 389-399.
Kazubek M: Improving local Wiener filtering using matched filter. IEE Proceedings: Vision, Image and Signal Processing 2006, 153(4): 501-506. 10.1049/ip-vis:20045128
Shui PL: Image denoising algorithm via doubly local Wiener filtering with directional windows in wavelet domain. IEEE Signal Processing Letters 2005, 12(10): 681-684.
Coifman R, Donoho D: Translation invariant denoising. In Wavelets and Statistics. Edited by: Antoniadis A, Oppenheim G. Springer, New York, NY, USA; 1995: 125-150.
Eslami R, Radha H: Optimal linear combination of denoising schemes for efficient removal of image artifacts. Proceedings of IEEE International Conference on Multimedia and Expo (ICME '06), July 2006: 465-468.
Buades A: Image and film denoising by non-local means, Ph.D. thesis. Universitat de les Illes Balears; 2007.
Mahmoudi M, Sapiro G: Fast image and video denoising via nonlocal means of similar neighborhoods. IEEE Signal Processing Letters 2005, 12(12): 839-842.
Gilboa G, Sochen N, Zeevi YY: Variational denoising of partly textured images by spatially varying constraints. IEEE Transactions on Image Processing 2006, 15(8): 2281-2289.
Yu TPY, Stoschek A, Donoho DL: Translation- and direction-invariant denoising of 2D and 3D images: experience and algorithms. Wavelet Applications in Signal and Image Processing IV, August 1996, Denver, Colo, USA, Proceedings of SPIE 2825: 608-619.
Isar A, Isar D, Quinquis A: Multiscale MAP denoising of SAR and SAS images. Sea Technology 2007, 48(2): 46-48.
Choi H, Baraniuk RG: Multiple wavelet basis image denoising using Besov ball projections. IEEE Signal Processing Letters 2004, 11(9): 712-720.
Fablet R, Augustin JM: Variational multiwavelet restoration of noisy images. Proceedings of IEEE International Conference on Image Processing (ICIP '05), September 2005, Genoa, Italy, 3: 305-308.
Abdelnour A, Selesnick IW: Nearly symmetric orthogonal wavelet basis. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '01), May 2001, Salt Lake City, Utah, USA.
Moga S, Isar A: SONAR image denoising using a Bayesian approach in the wavelet domain. Proceedings of International Conference on Advances in Stochastic Modelling and Data Analysis (ASMDA '07), May-June 2007, Chania, Crete, Greece: 1275-1281.
Argenti F, Alparone L: Speckle removal from SAR images in the undecimated wavelet domain. IEEE Transactions on Geoscience and Remote Sensing 2002, 40(11): 2363-2374. 10.1109/TGRS.2002.805083
Achim A, Kuruoğlu EE, Zerubia J: SAR image filtering based on the heavy-tailed Rayleigh model. Research report, INRIA, Sophia Antipolis, France; February 2005.
Walessa M, Datcu M: Model-based despeckling and information extraction from SAR images. IEEE Transactions on Geoscience and Remote Sensing 2000, 38(5): 2258-2269. 10.1109/36.868883
Miller M, Kingsbury N: Image denoising using derotated complex wavelet coefficients. IEEE Transactions on Image Processing 2008, 17(9): 1500-1511.
Wang H, Peng J, Wu W: Fusion algorithm for multisensor images based on discrete multiwavelet transform. IEE Proceedings: Vision, Image and Signal Processing 2002, 149(5): 283-289. 10.1049/ip-vis:20020612
Acknowledgments
We have established a collaboration with the specialists from the French Sea Institute, IFREMER, in Brest, with respect to the denoising of SONAR images. We acknowledge the importance of the many hours spent in meetings with Xavier Lurton and Jean-Marie Augustin dedicated to our introduction to SONAR technology. The authors would like to thank the anonymous reviewers for their very competent and efficient help. The research reported in this paper was developed in the framework of a grant of the Romanian Research Council (CNCSIS) entitled "Using Wavelets Theory for Decision Making," no. 349/13.01'09.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Isar, A., Moga, S. & Isar, D. A New Denoising System for SONAR Images. J Image Video Proc 2009, 173841 (2009). https://doi.org/10.1155/2009/173841