A convex nonlocal total variation regularization algorithm for multiplicative noise removal
EURASIP Journal on Image and Video Processing, volume 2019, Article number: 28 (2019)
Abstract
This study proposes a nonlocal total variation restoration method to address multiplicative noise removal problems. The strictly convex nonlocal total variation objective effectively utilizes prior information about the multiplicative noise and is derived from the maximum a posteriori (MAP) estimator. An efficient iterative multivariable minimization algorithm is then designed to optimize the proposed model. Finally, we provide a rigorous convergence analysis of the alternating multivariable minimization iteration. The experimental results demonstrate that the proposed model outperforms related current models in terms of both evaluation indices and visual image quality.
1 Introduction
Image deblurring is an important task with numerous applications in both mathematics and image processing. It is an inverse problem: recovering the unknown original image u from the noisy observation f. Total variation (TV) regularization methods are efficient at smoothing a noisy image while effectively preserving image textures and edges [1, 2]. In recent years, a large number of TV methods have been studied for additive noise removal [3, 4], most of which are convex variational models. Convex models can be optimized using simple and reliable numerical methods, such as gradient descent [5], the primal-dual formulation [6], the alternating direction method of multipliers [7], and Bregmanized operator splitting [8].
Multiplicative noise often exists in coherent imaging systems, such as ultrasonic imaging, optical coherence tomography (OCT), and synthetic aperture radar (SAR) [9,10,11]. Speckle is the most essential characteristic of images corrupted by multiplicative noise. For example, a radar emits coherent waves, and the reflected scattered waves are captured by the radar sensor. The scattered waves are correlated and interfere with one another, so the obtained image is degraded by speckle noise. Owing to the coherent nature of multiplicative noise, despeckling is more difficult than additive noise removal. If the statistical properties of the multiplicative noise are known, it can be removed effectively. According to the formation mechanism of multiplicative noise, several statistical noise models have been established, such as the Rayleigh [12], Poisson [13], Gaussian [14], and Gamma [15] noise models.
Over the last decade, several well-known local TV approaches have been successfully used to remove multiplicative noise because of the edge-preserving property of the local TV regularizer. Rudin, Lions, and Osher (RLO) [14] proposed the first local total variation method for multiplicative Gaussian noise removal. Aubert and Aujol (AA) [15] constructed a novel local TV model based on the statistics of multiplicative noise and used the maximum a posteriori (MAP) estimator to remove multiplicative Gamma noise. Shi and Osher (SO) [16] discussed the statistical characteristics of multiplicative noise and proposed a general local TV model for reducing different types of multiplicative noise, but the fidelity term in their model is not strictly convex. To overcome this drawback, Huang, Ng, and Wen (HNW) [17] utilized a log transformation and constructed a strictly convex local TV model whose global optimal solution can be computed easily. Furthermore, reference [18] integrated a quadratic penalty function into a local TV model and proposed a new convex variational model for low-level multiplicative noise removal. Reference [19] designed a convex model well suited to high-level multiplicative noise removal by combining a data fitting term, a quadratic penalty term, and a TV regularizer.
Unfortunately, owing to the local total variation regularization framework, smeared textures and staircase effects frequently occur in the denoised image [20, 21]. Exploiting nonlocal correlation information of the image can improve the performance of total variation and achieve better denoising results [22, 23]. One of the well-known nonlocal-based methods is the nonlocal means filter (NLM), which restores the image using nonlocally similar patches. Nonlocal convex functions recently utilized as regularization terms have been successfully applied to multiplicative noise reduction [24, 25]. Reference [26] applied the nonlocal total variation (NLTV) norm to the AA model and proposed a new NLTV-based method for multiplicative noise reduction. Unfortunately, this model is nonconvex, so it is usually difficult to obtain a global solution. Dong et al. proposed a convex nonlocal TV model for multiplicative noise and introduced corresponding minimization iteration algorithms [27]. Since NLTV makes full use of self-similarity and redundancy within images, it has good despeckling and denoising performance. However, NLTV for multiplicative noise reduction is still an open area of research.
In this study, we concentrate on Gamma-distributed noise and propose a new NLTV-based model for multiplicative noise removal that overcomes the drawbacks of current NLTV-based models. First, we utilize prior information about the multiplicative noise and use MAP estimation to formulate a novel, strictly convex NLTV model. To optimize the proposed model efficiently, we use the split Bregman iteration method to design an alternating multivariable minimization iteration. We also provide a rigorous convergence analysis of the alternating iteration method. The experimental results demonstrate that the proposed NLTV model performs better than other NLTV-based models for multiplicative noise removal.
The remainder of this paper is organized as follows. Related NLTV methods are reviewed in Section 2. In Section 3, we propose a new NLTV-based model for multiplicative noise removal and design an alternating algorithm to optimize it. In Section 4, we apply the proposed model to image denoising to demonstrate its good performance. Finally, conclusions are provided in Section 5.
2 Overview of NLTV algorithms for multiplicative noise reduction
A blurred image contaminated by noise has a higher total variation than the clean original image; minimizing the total variation of the noisy image therefore deblurs the image and reduces the noise. The total variation functional can be defined as

$$ \mathrm{TV}(u) = \int_{\Omega} \left|\nabla u\right| dx, \qquad (1) $$

where ∇u is the gradient of u. The first NLTV regularization-based image denoising method was presented by Gilboa and Osher [28] for additive noise removal and is described as follows:

$$ \min_{u} \int_{\Omega} \left|\nabla_{NL} u\right| dx + \frac{\lambda}{2} \int_{\Omega} (u - f)^{2}\, dx. \qquad (2) $$
Image denoising obtains the denoised image u* by minimizing the bounded energy function (2), which is composed of a total variation term and a fidelity term. Reducing the total variation term smooths the noisy image, while minimizing the fidelity term keeps the denoised image close to the observed image. λ is the regularization parameter that balances the two terms. To date, NLTV methods for additive noise reduction have been extensively studied; however, multiplicative noise reduction by NLTV methods is still an open area of research. In this section, we give the definitions of the NLTV operators and review NLTV models for multiplicative noise reduction.
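As a minimal numerical sketch (the forward-difference discretization and replicated borders are assumptions for illustration, not the paper's scheme), the discrete total variation of an image can be computed as:

```python
import numpy as np

def total_variation(u):
    """Discrete isotropic total variation TV(u) = sum of |grad u|,
    using forward differences with replicated borders (a common
    discretization; the paper does not fix a particular scheme)."""
    dx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal differences
    dy = np.diff(u, axis=0, append=u[-1:, :])  # vertical differences
    return float(np.sum(np.sqrt(dx**2 + dy**2)))
```

A noisy image has a larger total variation than the clean one, which is exactly what minimizing (2) exploits.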
2.1 Nonlocal total variation
Let Ω ⊂ R² be a bounded domain and u : Ω → R a real function. If (x, y) ∈ Ω × Ω is a pair of points, the nonlocal gradient ∇_{NL}u(x, ·) at x can be defined by

$$ \nabla_{NL} u(x, y) = \big(u(y) - u(x)\big)\sqrt{w(x, y)}, $$
where w(x, y) is a symmetric weight function that indicates the degree of similarity between the square patches centered at the points x and y. It can be defined by

$$ w(x, y) = \exp\left(-\frac{\big(G_{a} * \left|u(x+\cdot) - u(y+\cdot)\right|^{2}\big)(0)}{h^{2}}\right), $$

where G_a is a Gaussian kernel of standard deviation a, h is a filtering scale parameter, and u(x + ·) denotes a neighborhood patch centered on pixel x.
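A simplified sketch of the patch-similarity weight (the function name and parameters are ours, and for brevity the Gaussian kernel G_a is replaced by a uniform patch average, which is an assumption):

```python
import numpy as np

def nonlocal_weight(u, x, y, r=2, h=10.0):
    """Similarity weight w(x, y) between the square patches of radius r
    centred at pixels x and y. A uniform average replaces the Gaussian
    patch kernel G_a for simplicity (an assumption for illustration)."""
    px = u[x[0]-r:x[0]+r+1, x[1]-r:x[1]+r+1].astype(float)
    py = u[y[0]-r:y[0]+r+1, y[1]-r:y[1]+r+1].astype(float)
    d2 = np.mean((px - py) ** 2)  # patch distance |u(x+.) - u(y+.)|^2
    return float(np.exp(-d2 / h**2))
```

Note that the weight is symmetric, w(x, y) = w(y, x), and equals 1 for identical patches.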
Therefore, the norm of the nonlocal gradient, the nonlocal divergence, and the graph Laplacian operators can respectively be defined as follows:

$$ \left|\nabla_{NL} u\right|(x) = \left(\int_{\Omega} \big(u(y) - u(x)\big)^{2} w(x, y)\, dy\right)^{1/2}, $$
$$ (\operatorname{div}_{NL} v)(x) = \int_{\Omega} \big(v(x, y) - v(y, x)\big)\sqrt{w(x, y)}\, dy, $$
$$ (\Delta_{NL} u)(x) = \int_{\Omega} \big(u(y) - u(x)\big)\, w(x, y)\, dy. $$
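For a flattened image of N pixels and a full symmetric N × N weight matrix, these operators can be sketched discretely as follows (an illustration under those assumptions; practical implementations store only a few neighbors per pixel):

```python
import numpy as np

def nl_gradient(u, w):
    """Nonlocal gradient: (grad_NL u)(x, y) = (u(y) - u(x)) * sqrt(w(x, y))
    for a flattened image u of N pixels and a symmetric N x N weight matrix w."""
    u = np.asarray(u, dtype=float)
    return (u[None, :] - u[:, None]) * np.sqrt(w)

def nl_divergence(v, w):
    """Nonlocal divergence, the negative adjoint of the gradient:
    (div_NL v)(x) = sum_y (v(x, y) - v(y, x)) * sqrt(w(x, y))."""
    return np.sum((v - v.T) * np.sqrt(w), axis=1)

def nl_laplacian(u, w):
    """Graph Laplacian: (lap_NL u)(x) = sum_y (u(y) - u(x)) * w(x, y)."""
    u = np.asarray(u, dtype=float)
    return np.sum((u[None, :] - u[:, None]) * w, axis=1)
```

The divergence is the negative adjoint of the gradient, ⟨∇_NL u, v⟩ = −⟨u, div_NL v⟩, which can be checked numerically for any symmetric weight matrix.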
2.2 NLTV method for multiplicative noise reduction
Multiplicative noise removal aims to recover the original image u from the observed noisy image f. The image degradation model can be mathematically described as

$$ f = u\,n, $$

where n denotes the multiplicative noise. We assume that f ≥ 0 and u ≥ 0. The multiplicative noise follows a Gamma law with mean 1; therefore, the density function of the noise n is

$$ g(n) = \frac{L^{L}}{\Gamma(L)}\, n^{L-1} \exp(-Ln)\, \mathbf{1}_{\{n \ge 0\}}, $$
where Γ is the Gamma function and L is a positive integer. The first TV model for multiplicative noise removal, presented by Aubert and Aujol as the AA model [15], is the following minimization problem derived from MAP estimation:

$$ \min_{u} \int_{\Omega} \left|\nabla u\right| dx + \lambda \int_{\Omega} \left(\log u + \frac{f}{u}\right) dx. $$
The AA model is efficient for multiplicative noise removal. However, because it exploits the local total variation regularization framework, it suffers from smeared textures and staircase effects.
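The Gamma noise model above can be simulated directly; a sketch (function name ours) using the shape/scale parameterization so that the noise has mean 1 and variance 1/L:

```python
import numpy as np

def add_gamma_speckle(u, L=10, seed=None):
    """Corrupt an image with multiplicative Gamma noise of mean 1:
    f = u * n with n ~ Gamma(shape=L, scale=1/L), so E[n] = 1 and
    Var[n] = 1/L, matching the density g(n) above."""
    rng = np.random.default_rng(seed)
    n = rng.gamma(shape=L, scale=1.0 / L, size=np.shape(u))
    return np.asarray(u, dtype=float) * n
```

Larger L means weaker speckle; this is how the test images in Section 4 are typically degraded.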
Motivated by the AA model, Li replaced the TV regularizer with the NLTV norm in the AA model and proposed the nonlocal-AA model [26]:

$$ \min_{u} \int_{\Omega} \left|\nabla_{NL} u\right| dx + \lambda \int_{\Omega} \left(\log u + \frac{f}{u}\right) dx. $$
Like the AA model, the nonlocal-AA model performs well for image denoising; however, its fidelity term is convex only for u ∈ (0, 2f). Because of this nonconvexity, it is usually difficult to obtain a global optimal solution. Inspired by the SO model [16], Dong et al. suggested the log transformation (z = log u) to resolve the nonconvexity. The transformed variational model then becomes

$$ \min_{z} \int_{\Omega} \left|\nabla_{NL} z\right| dx + \lambda \int_{\Omega} \left(z + f e^{-z}\right) dx. $$
We note that the above functional is strictly convex, so it is easy to obtain a global optimal solution and find the unique minimizer z. This TV model is referred to as the exponential nonlocal-SO model [27].
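A quick numerical illustration (ours, not from the paper) of why the log transformation restores convexity: under z = log u, the AA-type fidelity log u + f/u becomes z + f e^{−z}, whose second derivative in z is f e^{−z} > 0 wherever f > 0.

```python
import numpy as np

# Second derivative of the log-domain fidelity z + f*exp(-z)
# is f*exp(-z), strictly positive for f > 0, so the transformed
# model is convex in z on the whole real line.
f = 2.5
z = np.linspace(-2.0, 3.0, 101)
second_derivative = f * np.exp(-z)
assert np.all(second_derivative > 0)
```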
3 The proposed method: multiplicative denoising nonlocal total variation model
In this study, we derive a strictly convex NLTV model for multiplicative noise removal and employ Bregman iteration to optimize it.
3.1 The proposed model
Assume the prior information about the mean and variance of the multiplicative noise is known in advance, that is,

$$ \frac{1}{N}\int_{\Omega} \frac{f}{u}\, dx = 1, \qquad \frac{1}{N}\int_{\Omega} \left(\frac{f}{u} - 1\right)^{2} dx = \sigma^{2}, $$

where N = ∫_Ω 1 dx. The mean of the noise is 1 and its variance equals σ². In the aforementioned approaches, such as (10)–(12), only the density function of the noisy image is exploited in the MAP estimation used to derive the minimization problem. By introducing the two constraints above (mean and variance) into the nonlocal-AA model, we can improve the NLTV model and obtain the following constrained optimization problem:
Our goal is to solve the above equality-constrained minimization problem (15). The constrained optimization problem is converted into an unconstrained formulation, which can be simplified as

$$ \min_{u} \int_{\Omega} \left|\nabla_{NL} u\right| dx + \lambda \int_{\Omega} \left(a\frac{f}{u} + \frac{b}{2}\left(\frac{f}{u}\right)^{2} + c\log u\right) dx, \qquad (17) $$
where a, b, and c are constraint parameters greater than 0, and \( H(u)=\lambda \left(a\frac{f}{u}+\frac{b}{2}{\left(\frac{f}{u}\right)}^2+c\log u\right) \) is the fidelity term, which is continuous. The initial data satisfy u(0) = f, and requiring H(u) to attain its minimum at u = u(0) yields c = a + b.
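A sketch of the computation behind c = a + b, using the H(u) defined above and the stationarity requirement H′(u(0)) = 0 at u(0) = f:

```latex
H'(u) = \lambda\left(-\frac{af}{u^{2}} - \frac{bf^{2}}{u^{3}} + \frac{c}{u}\right),
\qquad
H'(f) = \frac{\lambda}{f}\left(-a - b + c\right) = 0
\;\Longrightarrow\; c = a + b.
```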
Unfortunately, this model is nonconvex. As before, we use the variable z = log u and, by replacing the regularizer ∇_{NL}u with ∇_{NL}z, convert (17) into the following minimization problem:

$$ \min_{z} \int_{\Omega} \left|\nabla_{NL} z\right| dx + \lambda \int_{\Omega} \left(a f e^{-z} + \frac{b}{2} f^{2} e^{-2z} + c z\right) dx. \qquad (19) $$
3.2 Bregman iteration for the proposed model
The above problem (19) can be optimized by an iteration-based multivariable minimization algorithm. Note that the fidelity term in (19) contains exponential terms. Thus, we introduce an auxiliary variable p (p = z) to split the problem into subproblems that are easier to solve. Equation (19) is then rewritten as the following constrained problem:

$$ \min_{z,\,p} \int_{\Omega} \left|\nabla_{NL} p\right| dx + \lambda \int_{\Omega} \left(a f e^{-z} + \frac{b}{2} f^{2} e^{-2z} + c z\right) dx \quad \text{s.t.} \quad p = z. \qquad (20) $$
The minimization problem (20) is equivalent to (19). The constrained minimization problem (20) can be transformed into the following unconstrained multivariable optimization problem:

$$ E_{2\mu}(z, p) = \int_{\Omega} \left|\nabla_{NL} p\right| dx + \lambda \int_{\Omega} \left(a f e^{-z} + \frac{b}{2} f^{2} e^{-2z} + c z\right) dx + \frac{\mu}{2}\int_{\Omega} (p - z)^{2}\, dx. \qquad (21) $$
There are two variables in the objective function (21). Inspired by the core ideas of the split Bregman algorithm, we use an alternating minimization scheme and obtain the following two subproblems:

$$ p^{k+1} = \arg\min_{p} \int_{\Omega} \left|\nabla_{NL} p\right| dx + \frac{\mu}{2}\int_{\Omega} \big(p - z^{k}\big)^{2}\, dx, $$
$$ z^{k+1} = \arg\min_{z} \lambda \int_{\Omega} \left(a f e^{-z} + \frac{b}{2} f^{2} e^{-2z} + c z\right) dx + \frac{\mu}{2}\int_{\Omega} \big(p^{k+1} - z\big)^{2}\, dx. $$
To solve the p-subproblem, ∇_{NL}p is replaced with an auxiliary variable d, and the constraint d = ∇_{NL}p is enforced via the Bregman iteration process as follows:
where b is an auxiliary (Bregman) variable. The solution of (23) is obtained by alternating between the following minimization subproblems:
To minimize (25) by gradient descent, we derive the following optimality equation for p^{k + 1}:
Using a Gauss-Seidel iterative scheme, p^{k + 1} is computed as
To compute d^{k + 1}, we use the soft-shrinkage formula [29] as follows:
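The soft-shrinkage (soft-thresholding) operator of [29] has the standard closed form below; applying it with threshold 1/γ to ∇_NL p^{k+1} + b^k is the usual split Bregman d-update (the exact threshold choice is an assumption based on that standard scheme):

```python
import numpy as np

def soft_shrink(v, t):
    """Component-wise soft-shrinkage (soft-thresholding):
    shrink(v, t) = sign(v) * max(|v| - t, 0).
    This is the closed-form d-update in the split Bregman scheme."""
    v = np.asarray(v, dtype=float)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```

Values with magnitude below the threshold are set to zero; larger values are shrunk toward zero by t.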
Optimizing z^{k + 1} is equivalent to solving the following Euler-Lagrange equation:
The Newton method is used to yield a fast solution:
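A generic sketch of the per-pixel Newton iteration (the residual g and its derivative dg stand in for the paper's exact Euler-Lagrange expressions, which are not reproduced here):

```python
import numpy as np

def newton_update(g, dg, z0, iters=20):
    """Per-pixel Newton iteration z <- z - g(z)/g'(z) for a scalar
    optimality equation g(z) = 0; g and dg are placeholders for the
    z-subproblem's residual and its derivative."""
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z = z - g(z) / dg(z)
    return z
```

For instance, solving z² − 4 = 0 from z₀ = 3 converges rapidly to the root 2.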
All of these equations are combined and summarized in the algorithm that follows:
3.3 Bregman iteration for NLTV minimization
Initialization: z^0 = log f, p^0 = z^0, b^0 = d^0 = 0, k = 0; choose λ, μ, γ, and tol
While ‖z^{k + 1} − z^k‖_2/‖z^k‖_2 > tol, repeat the p-, d-, b-, and z-updates above
End
3.3.1 Convergence analysis
We first analyze the convexity of the objective function to simplify the proof of convergence of the minimization iteration scheme for our proposed model. We then prove that the sequence generated by the alternating iteration scheme converges to the minimum point of (21).
For the transformation z = log u, it is obvious that the second derivative of the fidelity term in (21) is proportional to af exp(−z) + bf² exp(−2z), which is always greater than zero. Therefore, this term is strictly convex in z. Next, we prove that the regularization term ∇_{NL}z is also convex.
Assuming k₁, k₂ > 0 with k₁ + k₂ = 1, for all z₁, z₂ we obtain
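Assuming the intended argument is the standard one, the convexity of the regularizer follows from the linearity of ∇_NL and the triangle inequality for the norm:

```latex
\|\nabla_{NL}(k_{1} z_{1} + k_{2} z_{2})\|
= \|k_{1}\nabla_{NL} z_{1} + k_{2}\nabla_{NL} z_{2}\|
\le k_{1}\|\nabla_{NL} z_{1}\| + k_{2}\|\nabla_{NL} z_{2}\|.
```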
Since the regularization term is convex and the fidelity term is strictly convex, (21) is strictly convex for all z. We obtain the denoised image z* by minimizing the functional and reaching the global minimum point. We now prove that the above alternating optimization subproblem algorithm converges to the global minimum point. Fundamental criteria and properties of alternating iteration minimization used in the convergence proof are given in [30, 31]. The alternating optimization subproblems are defined as
E_{2μ}(z, p) is convex and separately differentiable with respect to z and p. Suppose the unique minimizer of E_{2μ}(z, p) is \( \left(\tilde{z},\tilde{p}\right) \). We note that
This implies that \( \left(\tilde{z},\tilde{p}\right) \) is the minimizer of E_{2μ}, which signifies that \( \tilde{z}=R\left(\tilde{p}\right)=R\left(S\left(\tilde{z}\right)\right) \) and \( \tilde{p}=S\left(\tilde{z}\right)=S\left(R\left(\tilde{p}\right)\right) \). Therefore, \( \tilde{z} \) and \( \tilde{p} \) are fixed points.
Since R(S(·)) alternately minimizes E_{2μ}(z, p) and is convex and nonexpansive, we obtain
This implies that \( \left\Vert {z}^k-\tilde{z}\right\Vert \) is monotonically decreasing (note that z^k is a bounded sequence). Therefore, we deduce that z^k converges to a limit point \( \widehat{z} \) such that
Similarly, p^k converges to a limit point \( \widehat{p} \).
The denoised image z* is the unique minimizer of the problem E_1(z). Let p* = z*; then p* and z* are the minimizers of the problem E_2(z, p). Suppose the subsequences \( \left\{{z}^{k_j}\right\}\subseteq {\left\{{z}^k\right\}}_{k=1}^{\infty } \) and \( \left\{{p}^{k_j}\right\}\subseteq {\left\{{p}^k\right\}}_{k=1}^{\infty } \) are convergent and minimize the energy E_{2μ}(z, p). Combining the above, we obtain the following inequality:
When k_j → ∞,
Since μ > 0, it follows that
Since (z*, p*) is the unique solution of the minimization problem E_2(z, p), we can deduce
Hence, Eq. (40) can be expressed as
This implies
Combining Eqs. (37), (42), and (44), we conclude that
From Eq. (45), we conclude that z^k converges to z*, which is the unique minimizer of E_1(z).
4 Experiment results and discussions
4.1 Experimental setting
In this subsection, we present experimental results to demonstrate the effectiveness of our proposed model. We experiment on classical grayscale images and coherent imaging images contaminated by artificial multiplicative Gamma noise. Our proposed model is compared with several recent NLTV-based models, namely the nonlocal-AA and nonlocal-SO models. All simulations are performed in MATLAB 9.0 on an Intel i7 PC with 4 GB of memory.
To reduce the computational complexity, we only compute the ten best neighbors in the 21 × 21 nonlocal search window and the four nearest neighbors in the 5 × 5 patch. We set the stopping criterion tol = 0.001 to terminate the iteration. The regularization parameters are fixed: μ = 0.1, λ = 10, and γ = 20. To objectively assess the quality of the denoised image, the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) are used, defined as

$$ \mathrm{PSNR} = 10\log_{10}\frac{255^{2}\,MN}{\sum_{i,j}\big(u(i,j)-\hat{u}(i,j)\big)^{2}}, \qquad \mathrm{SSIM} = \frac{\big(2\mu_{u}\mu_{\hat{u}}+c_{1}\big)\big(2\sigma_{u\hat{u}}+c_{2}\big)}{\big(\mu_{u}^{2}+\mu_{\hat{u}}^{2}+c_{1}\big)\big(\sigma_{u}^{2}+\sigma_{\hat{u}}^{2}+c_{2}\big)}, $$

where M × N is the size of the image, u and \( \hat{u} \) are, respectively, the original and recovered images, \( \mu_{u} \) and \( \mu_{\hat{u}} \) are their mean values, \( \sigma_{u} \) and \( \sigma_{\hat{u}} \) their standard deviations, \( \sigma_{u\hat{u}} \) is the covariance of u and \( \hat{u} \), and c₁, c₂ are predefined constants.
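The PSNR can be computed in a few lines (a sketch; the peak value 255 is assumed for 8-bit images):

```python
import numpy as np

def psnr(u, u_hat, peak=255.0):
    """Peak signal-to-noise ratio in dB:
    PSNR = 10 * log10(peak^2 / MSE), MSE = mean((u - u_hat)^2)."""
    mse = np.mean((np.asarray(u, float) - np.asarray(u_hat, float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```

Higher PSNR indicates a restoration closer to the original image.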
4.2 Results on classical grayscale images with artificial noise
We use classical grayscale images with artificial multiplicative Gamma noise for the first test. Figure 1 shows the four 256 × 256 × 8 test images used in our experiments. The original noise-free images in Fig. 1 are contaminated by multiplicative noise following the Gamma distribution with mean 1 and variance σ. In our proposed model, we set a = b = 0.5 to effectively utilize the prior information about the mean and variance of the noise.
Table 1 lists the PSNR and SSIM values measuring the denoising performance of the different NLTV-based models on the test images corrupted by different levels of Gamma noise. The highest PSNR and SSIM values are highlighted in italics. From Table 1, it is apparent that our proposed NLTV model almost always attains the highest PSNR and SSIM values among the NLTV-based multiplicative noise removal models. The nonlocal-AA model fails to be competitive with the other two methods in most cases; this is attributable to its nonconvexity, which makes the minimum point difficult to compute. The nonlocal-SO model almost always yields lower PSNR and SSIM values than our method because it does not exploit the prior information about the noise mean and variance. Table 1 thus shows that our proposed NLTV method effectively utilizes prior information about the multiplicative noise and attains higher objective criteria values than the other methods.
To evaluate the visual quality of the models above, we present the images restored by the different models from the noisy Lena image, corrupted by multiplicative noise with a variance of 0.05. The restored images are shown in Fig. 2. The image restored by the nonlocal-AA method (Fig. 2b) exhibits undesired white point-like artifacts and suffers a loss of details and edges; the textures in the hat and the facial region are seriously damaged. The nonlocal-SO method preserves image structures and edges; however, it over-smooths image textures and eliminates details (see the wrinkles on the hat in Fig. 2c). From the image restored by our method (Fig. 2d), we note that it removes more multiplicative noise and better preserves textures and details than the other methods: the curly hair and the wrinkles on the hat can be well distinguished. In Fig. 3, the 200th lines of the clean, noisy, and restored images of Fig. 2 are presented. The line reconstructed by our method (Fig. 3d) is closer to the original line than those restored by the other methods.
To highlight the competitive visual performance of our proposed model, parts of the denoised Cameraman and Woman images are presented to assess the texture-preserving property of the different models in Figs. 4 and 5, respectively. To further evaluate noise reduction and texture preservation, we also present the method noise images, i.e., the differences between the original and denoised images, in Figs. 4 and 5. The nonlocal-AA results contain numerous white point-like artifacts and staircase effects in the smooth areas (Figs. 4b and 5b). The residual noise and lost textures in the method noise images of the nonlocal-AA method (Figs. 4h and 5h) reveal that it provides limited noise suppression and damages details. The nonlocal-SO method (Figs. 4c and 5c) reduces the staircase effect and effectively removes noise (see the smooth regions of Figs. 4c and 5c), but it blurs image edges and textures (the camera's edges in Fig. 4c and the hood's texture in Fig. 5c are barely visible). Our proposed method exhibits the best visual appearance among the NLTV-based methods: the camera's edges in Fig. 4d and the hood's texture in Fig. 5d are clearly visible. Additionally, comparing the method noise images in Figs. 4 and 5, our proposed method produces less residual noise and preserves more textures than the other methods. The results in Figs. 4 and 5 let us conclude that our proposed model removes multiplicative noise while reconstructing more image textures and details than the other models.
4.3 Results on images acquired by coherent imaging system
Since multiplicative Gamma noise often occurs in coherent imaging systems, we compare the performance of our proposed model with the other models on more complicated images acquired by coherent imaging techniques, where it is not easy to discern the foreground from the background. In this section, we use an ultrasonic image, an OCT image, and a SAR image to verify the effectiveness of our proposed method.
Figure 6 shows visual comparisons of the denoised images produced by the different NLTV-based methods. The original 256 × 256 × 8 ultrasonic image is contaminated by multiplicative Gamma noise with variance 0.1. Figure 6c–e respectively shows the denoised images obtained by the nonlocal-AA method, the nonlocal-SO method, and our method. A texture region marked by a red-lined box is selected for visual comparison. Observing the edges and details in the red-lined box, effects similar to those in the above examples appear here. The nonlocal-AA method cannot adequately remove the multiplicative Gamma noise: residual noise remains in the denoised image and artifacts obscure the edges. The nonlocal-SO method performs better than the nonlocal-AA method, but some edges in the red-lined box are seriously obscured or invisible. Our proposed method attains almost the highest PSNR and SSIM values and exhibits the best visual quality among all the methods: the edges in the red-lined box are clearly recovered, and residual noise is difficult to find.
Moreover, we use OCT (128 × 128 × 8) and SAR (128 × 128 × 8) images, contaminated by multiplicative Gamma noise with variances 0.1 and 0.08, respectively, to further verify the effectiveness of our proposed method. To distinguish the differences in edge preservation and texture contrast between the methods, a square region containing salient edges and complicated textures is selected for enlarged views. The original and denoised images, and the zoomed versions of the selected region marked with a red box, are presented in Figs. 7 and 8. The noisy images (Figs. 7b and 8b) show that speckle noise reduces the visual quality, resulting in barely visible textures and blurred edges. The nonlocal-AA method (Figs. 7c and 8c) cannot effectively reduce the noise, and edges remain obscured. The nonlocal-SO method (Figs. 7d and 8d) eliminates these artifacts and improves on the nonlocal-AA method, but blur still exists in the denoised images, especially at the edges (see the zoomed versions in Figs. 7d and 8d). Since our method makes full use of prior information about the noise and is an extension of the nonlocal-SO method, it is more effective at removing noise and preserving image details. The textures recovered by our method (Figs. 7e and 8e) are clearer and more distinct than those recovered by the nonlocal-SO method. Additionally, our proposed method obtains larger PSNR and smaller MSE values than the other methods, again indicating the superiority of our proposed method in removing multiplicative Gamma noise from complicated images.
5 Conclusion
This study utilizes prior noise information and proposes a strictly convex NLTV-based multiplicative noise removal model within the maximum a posteriori (MAP) estimation framework. Based on the split Bregman iteration algorithm, we design an efficient alternating minimization iteration to optimize the proposed NLTV model. We also prove that the alternating minimization iteration converges to a fixed point, which is the unique solution of the original minimization problem. Finally, comparisons with related NLTV-based multiplicative noise removal models indicate that our proposed NLTV method effectively removes multiplicative noise and outperforms the related NLTV models.
The proposed method is suitable for multiplicative noise removal and can be successfully applied to coherent imaging systems. However, the alternating iteration algorithm involves a large number of predefined constants and parameters, and their values are important factors influencing the denoising result. In our experiments, these values are set manually; adaptively adjusting them to obtain better denoising results is a direction for future research.
It is worth mentioning that the proposed method cannot be directly applied to other types of noise removal problems, such as mixed noise. For example, in electronic microscopy imaging systems, the captured images are usually contaminated by a superposition of Gaussian and Poisson noise. Future research is required to extend NLTV to different types of noise, especially mixed noise. On the other hand, the proposed method can be applied to video sequences, which is not presented in this paper due to space limitations. We have focused on utilizing the correlation information within a single image and have not considered similar content and correlation information across different images. Strong correlation and a large amount of redundant information exist between adjacent frames in video sequences; accordingly, utilizing this similarity and redundancy to improve our proposed method is another direction for future research.
Abbreviations
 AA:

Aubert and Aujol
 HNW:

Huang, Ng, and Wen
 MAP:

Maximum a posteriori estimator
 NLM:

Nonlocal means filter
 NLTV:

Nonlocal total variation
 OCT:

Optical coherence tomography
 PSNR:

Peak signaltonoise ratio
 SAR:

Synthetic aperture radar
 SO:

Shi and Osher
 SSIM:

Structural similarity index
 TV:

Total variation
References
1. A.N. Tikhonov, V.Y. Arsenin, Solution of ill-posed problems. Math. Comput. 32(144), 491 (1977)
2. R. Acar, C.R. Vogel, Analysis of bounded variation penalty methods for ill-posed problems. Inverse Problems 10(6), 1217–1229 (1997)
3. L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms. Physica D 60(1–4), 259–268 (1992)
4. T. Chan, A. Marquina, P. Mulet, High-order total variation-based image restoration. SIAM J. Sci. Comput. 22(2), 503–516 (2000)
5. A. Beck, M. Teboulle, Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 18(11), 2419–2434 (2009)
6. A. Chambolle, An algorithm for total variation minimization and applications. J. Math. Imaging Vision 20(1–2), 89–97 (2004)
7. N.B. Bras, J. Bioucas-Dias, R.C. Martins, et al., An alternating direction algorithm for total variation reconstruction of distributed parameters. IEEE Trans. Image Process. 21(6), 3004–3016 (2012)
8. G. Li, X. Huang, S.G. Li, Adaptive Bregmanized total variation model for mixed noise removal. AEU Int. J. Electron. Commun. 80, 29–35 (2017)
9. L. Zhu, W. Wang, J. Qin, et al., Fast feature-preserving speckle reduction for ultrasound images via phase congruency. Signal Process. 134, 275–284 (2017)
10. S. Adabi, S. Conforto, A. Clayton, et al., An intelligent speckle reduction algorithm for optical coherence tomography images, in Proc. Int. Conf. on Photonics, Optics and Laser Technology (IEEE, 2017), pp. 38–43
11. N. Tabassum, A. Vaccari, S. Acton, Speckle removal and change preservation by distance-driven anisotropic diffusion of synthetic aperture radar temporal stacks. Digital Signal Process. 74, 43–55 (2018)
12. L. Denis, F. Tupin, J. Darbon, et al., SAR image regularization with fast approximate discrete minimization. IEEE Trans. Image Process. 18(7), 1588 (2009)
13. S. Setzer, G. Steidl, T. Teuber, Deblurring Poissonian images by split Bregman techniques. J. Vis. Commun. Image Represent. 21(3), 193–199 (2010)
14. L. Rudin, P.L. Lions, S. Osher, Multiplicative denoising and deblurring: theory and algorithms, in Geometric Level Set Methods in Imaging, Vision, and Graphics (Springer, New York, 2003), pp. 103–119
15. G. Aubert, J.F. Aujol, A variational approach to removing multiplicative noise. SIAM J. Appl. Math. 68(4), 925–946 (2008)
16. J. Shi, S. Osher, A nonlinear inverse scale space method for a convex multiplicative noise model. SIAM J. Imaging Sci. 1(3), 294–321 (2008)
17. Y.M. Huang, M.K. Ng, Y.W. Wen, A new total variation method for multiplicative noise removal. SIAM J. Imaging Sci. 2(1), 20–40 (2009)
18. Y. Dong, T. Zeng, A convex variational model for restoring blurred images with multiplicative noise. SIAM J. Imaging Sci. 6(3), 1598–1625 (2013)
19. J. Lu, L. Shen, C. Xu, et al., Multiplicative noise removal in imaging: an exp-model and its fixed-point proximity algorithm. Appl. Comput. Harmon. Anal. 41(2), 518–539 (2016)
20. S. Fu, C. Zhang, Adaptive non-convex total variation regularization for image restoration. Electron. Lett. 46(13), 907–908 (2010)
21. L.A. Vese, S.J. Osher, Modeling textures with total variation minimization and oscillating patterns in image processing. J. Sci. Comput. 19(1–3), 553–572 (2003)
22. A. Buades, B. Coll, J.M. Morel, A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 4(2), 490–530 (2005)
23. M. Chen, H. Zhang, G. Lin, et al., A new local and nonlocal total variation regularization model for image denoising. Clust. Comput. 6, 1–17 (2018)
24. X. Nie, X. Huang, W. Feng, A new nonlocal TV-based variational model for SAR image despeckling based on the G0 distribution. Digital Signal Process. 68, 44–56 (2017)
25. J. Li, Q. Yuan, H. Shen, et al., Hyperspectral image recovery employing a multidimensional nonlocal total variation model. Signal Process. 111, 230–248 (2015)
26. L. Shuai, X. Zhao, G. Wang, Nonlocal TV model for multiplicative noise with Rayleigh distribution removal. Chin. J. Sci. Instrum. 36(7), 1570–1576 (2015)
27. F. Dong, H. Zhang, D.X. Kong, Nonlocal total variation models for multiplicative noise removal using split Bregman iteration. Math. Comput. Model. 55(3), 939–954 (2012)
28. G. Gilboa, S. Osher, Nonlocal operators with applications to image processing. Multiscale Model. Simul. 7(3), 1005–1028 (2008)
29. W. Li, Q. Li, W. Gong, et al., Total variation blind deconvolution employing split Bregman iteration. J. Vis. Commun. Image Represent. 23(3), 409–417 (2012)
30. R.Q. Jia, H. Zhao, W. Zhao, Convergence analysis of the Bregman method for the variational model of image denoising. Appl. Comput. Harmon. Anal. 27(3), 367–379 (2009)
31. J.F. Cai, S. Osher, Z. Shen, Convergence of the linearized Bregman iteration for ℓ1-norm minimization. Math. Comput. 78(268), 2127–2136 (2009)
Acknowledgements
The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.
Funding
This research was supported by the Open Fund Project of the Artificial Intelligence Key Laboratory of Sichuan Province (Grant no. 2016RYY02), and the Scientific Research Project of Sichuan University of Science and Engineering (Grant no. 2018RCL17 and no. 2015RC16).
Availability of data and materials
The data and materials are available from the authors on request.
Author information
Authors and Affiliations
Contributions
All authors took part in the discussion of the work described in this paper. MC conceived the idea, optimized the model, and performed the experiments. HZ, QH, and CH were involved in the extensive discussions and evaluations. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Authors' information
Mingju Chen (1982) received the M.S. degree in communication engineering from Chongqing University of Posts and Telecommunications in 2007. He is currently pursuing the Ph.D. degree at Southwest University of Science and Technology. His research interests include machine vision inspection systems and image processing.
Hua Zhang (1969) received the Ph.D. degree in communication engineering from Chongqing University in 2006. He is currently a professor in the School of Information Engineering, Southwest University of Science and Technology. His research interests include nuclear detection technology, robot technology, and machine vision inspection systems.
Qiang Han (1987) received the B.S. degree from Ocean University of China in 2010 and the M.S. degree from Sichuan University of Science and Engineering in 2013. He is currently pursuing the Ph.D. degree at Southwest University of Science and Technology. His current research interests include consensus and coordination in multi-agent systems, networked control system theory, and its applications.
Chencheng Huang (1984) received the B.S. degree in applied mathematics from Shijiazhuang Tiedao University in 2007, the M.S. degree in applied mathematics from Chongqing University in 2011, and the Ph.D. degree from Chongqing University in 2015. He is currently a lecturer in the School of Automation and Information Engineering, Sichuan University of Science and Engineering. His research interests include image processing.
Competing interests
The authors declare that they have no competing interests.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Chen, M., Zhang, H., Han, Q. et al. A convex nonlocal total variation regularization algorithm for multiplicative noise removal. J Image Video Proc. 2019, 28 (2019). https://doi.org/10.1186/s13640-019-0410-2
Keywords
 Multiplicative noise
 Nonlocal total variation
 Alternating minimization problem
 Maximum a posteriori estimation