
A convex nonlocal total variation regularization algorithm for multiplicative noise removal

Abstract

This study proposes a nonlocal total variation restoration method to address multiplicative noise removal problems. The strictly convex nonlocal total variation objective effectively utilizes prior information about the multiplicative noise and is derived from the maximum a posteriori (MAP) estimator. An efficient iterative multivariable minimization algorithm is then designed to optimize our proposed model. Finally, we provide a rigorous convergence analysis of the alternating multivariable minimization iteration. The experimental results demonstrate that our proposed model outperforms other currently related models both in terms of evaluation indices and image visual quality.

1 Introduction

Image deblurring is an important task with numerous applications in both mathematics and image processing. It is an inverse problem that recovers the unknown original image u from the noisy image f. Total variation (TV) regularization methods are efficient for smoothing a noisy image while effectively preserving the image textures and edges [1, 2]. In recent years, a large number of TV methods have been extensively studied for additive noise removal [3, 4], most of which are convex variational models. Convex models can be optimized using simple and reliable numerical methods, such as gradient descent [5], the primal-dual formulation [6], the alternating direction method of multipliers [7], and Bregmanized operator splitting [8].

Multiplicative noise often exists in many coherent imaging systems, such as ultrasonic imaging, optical coherence tomography (OCT), synthetic aperture radar (SAR), and so on [9,10,11]. Speckle is the most essential characteristic of images corrupted by multiplicative noise. For example, a radar sends coherent waves, and the reflected scattered waves are captured by the radar sensor. The scattered waves are correlated and interfere with one another, so the obtained image is degraded by speckle noise. Owing to the coherent characteristics of multiplicative noise, despeckling is more difficult than additive noise removal. If the statistical properties of multiplicative noise are known, it can be removed effectively. According to the forming mechanism of multiplicative noise, many statistical distribution patterns of the noise have been identified, such as the Rayleigh noise model [12], Poisson noise model [13], Gaussian noise model [14], and Gamma noise model [15].

Over the last decade, some famous local TV approaches have been successfully used to remove multiplicative noise because of the edge-preserving property of the local TV regularizer. Rudin, Lions, and Osher (RLO) [14] proposed the first local total variation method for multiplicative Gaussian noise removal. Aubert and Aujol (AA) [15] proposed a novel local TV model derived from the maximum a posteriori (MAP) estimate to remove multiplicative Gamma noise. Shi and Osher (SO) [16] discussed the statistical characteristics of multiplicative noise and proposed a general local TV model for reducing different types of multiplicative noise, but the fidelity term in the model is not strictly convex. To overcome this particular drawback, Huang, Ng, and Wen (HNW) [17] utilized a log transformation and constructed a strictly convex local TV model whose global optimal solution can be easily computed. Furthermore, reference [18] integrated a quadratic penalty function into a local TV model and proposed a new convex variational model for low-level multiplicative noise removal. Reference [19] designed a convex model that is quite suitable for high-level multiplicative noise removal by combining a data fitting term, a quadratic penalty term, and a TV regularizer.

Unfortunately, owing to the local total variation regularization framework, smeared textures and numerous staircase effects frequently occur in the denoised image [20, 21]. Exploiting nonlocal correlation information of the image can improve the performance of total variation and achieve better image denoising results [22, 23]. One of the well-known nonlocal-based methods is the nonlocal means filter (NLM), which restores the image by using nonlocal similarity patches. Nonlocal convex functions have recently been utilized as regularization terms and successfully applied to multiplicative noise reduction [24, 25]. Reference [26] applied the nonlocal total variation (NLTV) norm to the AA model and proposed a new NLTV-based method for multiplicative noise reduction. Unfortunately, this model was nonconvex. Therefore, it is usually difficult to obtain a global solution. Dong et al. proposed a convex nonlocal TV model for multiplicative noise and introduced minimization iterative algorithms corresponding to the model [27]. Since the NLTV makes full use of self-similarity and redundancy within images, it has good image despeckling and denoising performance. However, the NLTV for multiplicative noise reduction is still an open area of research.

In this study, we concentrate on Gamma-distributed noise and propose a new NLTV-based model for multiplicative noise removal to overcome the drawbacks in current NLTV-based models. First, we utilize prior information regarding the multiplicative noise and use MAP estimation to formulate a novel, strictly convex NLTV model. To efficiently optimize our proposed model, we use the split Bregman iteration method to design an alternating multivariable minimization iteration for the convex model. We also provide a rigorous convergence analysis of the alternating iteration method. The experimental results demonstrate that the proposed NLTV model has better performance than some other NLTV-based models for multiplicative noise removal.

The following sections are organized as follows. The related NLTV methods are reviewed in Section 2. In Section 3, we propose a new NLTV-based model for multiplicative noise removal and design an alternating algorithm for optimizing our proposed model. In Section 4, we apply the proposed model to image deblurring to demonstrate its good performance. Finally, conclusions are provided in Section 5.

2 Overview of NLTV algorithms for multiplicative noise reduction

A blurred image contaminated by noise has higher total variation than the clean original image. Minimizing the total variation of the noisy image deblurs the image and reduces the noise. The total variation function can be defined as

$$ {J}_{TV}(u)={\left|\nabla u\right|}_1. $$
(1)

where ∇u is the gradient of u. The first NLTV regularization-based image denoising model was presented by Gilboa and Osher [28] for additive noise removal and is described as follows:

$$ u=\arg \kern0.4em \underset{u}{\min}\kern0.3em \left(\kern0.1em |{\nabla}_{NL}u|+\frac{\lambda }{2}\parallel f-u{\parallel}_2^2\right). $$
(2)

Image denoising obtains the denoised image u∗ by minimizing the energy function (2), which is composed of a total variation term and a fidelity term. Reducing the total variation term smooths the noisy image, and minimizing the fidelity term keeps the denoised image similar to the original image. λ is the regularization parameter that adjusts the balance between the two terms. To date, NLTV methods for additive noise reduction have been extensively studied. However, multiplicative noise reduction by NLTV methods is still an open area of research. In this study, we provide the definitions of NLTV and review NLTV models for multiplicative noise reduction.

2.1 Nonlocal total variation

Let Ω ⊂ R2 be a bounded domain and u : Ω → R denote a real function. If (x, y) ∈ Ω × Ω is a pair of points, then the nonlocal gradient ∇uNL(x, ⋅) at x can be defined by

$$ \nabla {u}_{NL}\left(x,y\right)=\left(u(x)-u(y)\right)\sqrt{w\left(x,y\right)} $$
(3)

where w(x, y) is a symmetric weight function that indicates the amount of similarity between the square patches centered at the points x and y. It can be defined by the following function

$$ {w}_{xy}=\exp \left\{-\frac{G_a\ast \left(\parallel u\left(x+\cdot \right)-u\left(y+\cdot \right){\parallel}^2\right)}{2{h}^2}\right\} $$
(4)

where Ga is a Gaussian of standard deviation a, and h is a filtering scale parameter. u(x + ⋅) denotes a neighborhood patch centered on pixel x.
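As a concrete illustration, the patch-similarity weight of Eq. (4) can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: a small separable Gaussian window stands in for Ga, and the function name, patch half-width, and default h are hypothetical, not part of the original formulation.

```python
import numpy as np

def patch_weight(u, x, y, half=2, h=10.0):
    """Similarity weight w(x, y) of Eq. (4) between the square patches
    centered at pixels x and y. `half` is the patch half-width and
    `h` is the filtering scale parameter."""
    px = u[x[0] - half:x[0] + half + 1, x[1] - half:x[1] + half + 1]
    py = u[y[0] - half:y[0] + half + 1, y[1] - half:y[1] + half + 1]
    # Separable Gaussian kernel standing in for G_a, normalized to sum 1
    g = np.exp(-(np.arange(-half, half + 1) ** 2) / (2.0 * half ** 2))
    G = np.outer(g, g)
    G /= G.sum()
    d2 = np.sum(G * (px - py) ** 2)     # G_a * ||u(x+.) - u(y+.)||^2
    return np.exp(-d2 / (2.0 * h ** 2))  # note the minus sign in Eq. (4)
```

Identical patches get weight 1, and the weight decays toward 0 as the patches become more dissimilar, which is what makes w(x, y) a similarity measure.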

Therefore, the norm of the nonlocal gradient, nonlocal divergence, and graph Laplacian operators can be respectively defined as follows:

$$ \mid \nabla {u}_{NL}\mid =\sqrt{\int_{\Omega}{\left(u(x)-u(y)\right)}^2{w}_{xy}\, dy}. $$
(5)
$$ {\operatorname{div}}_{NL}\overrightarrow{u}(x):= {\int}_{\Omega}\left(u\left(x,y\right)-u\left(y,x\right)\right)\sqrt{w_{xy}}\, dy. $$
(6)
$$ \varDelta {u}_{NL}:= \frac{1}{2}{\operatorname{div}}_{NL}\left(\nabla {u}_{NL}\right)={\int}_{\Omega}\left(u(y)-u(x)\right){w}_{xy}\, dy. $$
(7)
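On a discrete image with n pixels and a symmetric weight matrix w, these operators reduce to simple array expressions. The sketch below (helper names are our own) also checks the adjoint relation ⟨∇NLu, v⟩ = −⟨u, divNLv⟩ that the definitions imply for symmetric weights.

```python
import numpy as np

def nl_grad(u, w):
    """Nonlocal gradient (Eq. 3): (grad u)_{ij} = (u_j - u_i) * sqrt(w_ij)."""
    return (u[None, :] - u[:, None]) * np.sqrt(w)

def nl_div(v, w):
    """Nonlocal divergence (Eq. 6): (div v)_i = sum_j (v_ij - v_ji) * sqrt(w_ij)."""
    return ((v - v.T) * np.sqrt(w)).sum(axis=1)

def nl_laplacian(u, w):
    """Graph Laplacian (Eq. 7): half the divergence of the nonlocal gradient."""
    return 0.5 * nl_div(nl_grad(u, w), w)

rng = np.random.default_rng(0)
w = rng.random((6, 6)); w = 0.5 * (w + w.T)   # symmetric weights
u, v = rng.random(6), rng.random((6, 6))
# Adjointness: <grad u, v> = -<u, div v>
print(np.allclose(np.sum(nl_grad(u, w) * v), -np.dot(u, nl_div(v, w))))  # True
```

As expected for a Laplacian, constant signals are in its null space: `nl_laplacian` of a constant vector is identically zero.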

2.2 NLTV method for multiplicative noise reduction

Multiplicative noise removal aims to find the original image u from the observed noisy image f. The image degradation model can be mathematically described as

$$ f= un $$
(8)

where n denotes multiplicative noise. We assume that f > 0 and u > 0. The multiplicative noise follows a Gamma law with mean 1. Therefore, we obtain the density function of the noise n

$$ g(n)=\frac{L^L}{\Gamma (L)}{n}^{L-1}\exp \left(- Ln\right) $$
(9)

where Γ is the Gamma function and L is a positive integer. The pioneering TV model for multiplicative noise removal, presented by Aubert and Aujol, is the AA model [15], which is the following minimization problem derived from the MAP estimation:

$$ u=\arg \kern0.2em \underset{u}{\min}\left\{\left|\nabla u\right|+\lambda \left(\log u+\frac{f}{u}\right)\right\}. $$
(10)

The AA model is efficient for multiplicative noise removal. However, it has some problems since the local total variation regularization framework is exploited, such as smeared textures and the occurrence of staircase effects.

Motivated by the AA model, Li replaced the TV with the NLTV norm in the AA model and proposed the nonlocal-AA model [26]:

$$ u=\arg \kern0.2em \underset{u}{\min}\left\{\left|{\nabla}_{NL}u\right|+\lambda \left(\log u+\frac{f}{u}\right)\right\}. $$
(11)

Like the AA model, the nonlocal-AA model also performs well for image denoising. However, it is convex only for u ∈ (0, 2f). As a result of the nonconvexity of the nonlocal-AA model, it is usually difficult to obtain a global optimal solution. Inspired by the SO model [16], Dong et al. suggested the use of the log transformation (z = log u) to resolve the nonconvexity. The transformed variational model then becomes

$$ \underset{z}{\min}\left(\parallel {\nabla}_{NL}z{\parallel}_1+\lambda \underset{\varOmega }{\int}\left(z-{fe}^{-z}\right)\right). $$
(12)

We note that the above objective function is strictly convex. Therefore, it is easy to obtain a global optimal solution and find the unique minimizer z of the minimization problem. This model is referred to as the exponential nonlocal-SO model [27].

3 The proposed method—multiplicative denoising nonlocal total variation model

In our study, we obtain a strictly convex NLTV model for multiplicative noise removal and employ Bregman iteration to optimize it.

3.1 The proposed model

Assume that prior information about the mean and variance of the multiplicative noise is known in advance, that is,

$$ \frac{1}{N}\int \eta =1, $$
(13)
$$ \frac{1}{N}\int {\left(\eta -1\right)}^2={\sigma}^2 $$
(14)

where N =  ∫ 1. The mean of the noise is 1 and the variance equals σ2. In the aforementioned approaches, such as (10)–(12), only the density function of the noisy image is exploited in the MAP estimation to derive the minimization problem. Introducing the two previous constraints (mean and variance) into the nonlocal-AA model, we can improve the NLTV model and obtain the following constrained optimization problem:

$$ u=\arg \kern0.2em \underset{u}{\min}\left\{\left|\nabla {u}_{NL}\right|+\lambda \left(\log u+\frac{f}{u}\right)\right\}\kern0.6em s.t\kern0.3em \left(\kern0.1em \frac{1}{N}\int \frac{f}{u}=1\kern0.4em \mathrm{and}\kern0.3em \frac{1}{N}\int {\left(\frac{f}{u}-1\right)}^2={\sigma}^2\right)\kern0.4em . $$
(15)

Our goal is to solve the above equality constrained minimization problem (15). The constrained optimization problem is converted into an unconstrained formula:

$$ u=\arg \kern0.2em \underset{u}{\min}\left\{\left|\nabla {u}_{NL}\right|+{\lambda}_1\int \left(\log u+\frac{f}{u}\right)+{\lambda}_2\int \frac{f}{u}+{\lambda}_3\int {\left(\frac{f}{u}-1\right)}^2\right\}\kern0.7em . $$
(16)

The above minimization problem can be simplified as

$$ u=\arg \kern0.2em \underset{u}{\min}\left\{\left|{\nabla}_{NL}u\right|+\lambda \int \left(a\frac{f}{u}+\frac{b}{2}{\left(\frac{f}{u}\right)}^2+c\log u\right)\right\}\kern0.7em $$
(17)

where a, b, and c are constrained parameters that are greater than 0. \( H(u)=\lambda \left(a\frac{f}{u}+\frac{b}{2}{\left(\frac{f}{u}\right)}^2+c\log u\right) \) is the fidelity term, which is continuous. We obtain

$$ \frac{\partial H(u)}{\partial u}=\lambda \left(-a\frac{f}{u^2}-b\frac{f^2}{u^3}+c\frac{1}{u}\right). $$
(18)

The initial data satisfy u(0) = f, and H(u) should attain its minimum at u = u(0). Setting the derivative (18) to zero at u = f yields c = a + b.
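This stationarity condition is easy to verify numerically. The sketch below (a hypothetical check, with λ dropped since it only scales H, and arbitrary sample values for a, b, and f) confirms that u = f is a critical point of H once c = a + b.

```python
import math

def H(u, f=2.0, a=0.3, b=0.7):
    """Fidelity term H(u) = a*f/u + (b/2)*(f/u)**2 + c*log(u), with c = a + b."""
    c = a + b                      # the relation derived in the text
    return a * f / u + 0.5 * b * (f / u) ** 2 + c * math.log(u)

# Central-difference derivative of H at u = f is (numerically) zero:
f, eps = 2.0, 1e-6
dH = (H(f + eps) - H(f - eps)) / (2 * eps)
print(abs(dH) < 1e-6)  # True
```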

Unfortunately, this model is nonconvex. Similarly, we use the variable z = log u and, by replacing the regularizer |∇NLu| with |∇NLz|, we further convert (17) into the following minimization problem:

$$ u=\arg \kern0.3em \underset{z}{\min }{E}_1(z)=\arg \kern0.3em \underset{z}{\min}\left|{\nabla}_{NL}z\right|+\lambda \sum \left({afe}^{-z}+\frac{b}{2}{f}^2{e}^{-2z}+\left(a+b\right)z\right). $$
(19)

3.2 Bregman iteration for the proposed model

Problem (19) can be optimized by an iterative multivariable minimization algorithm. Note that the fidelity term in (19) contains exponential forms. Thus, we introduce an auxiliary variable p (p = z) to split the problem into subproblems that are easier to solve. Equation (19) is then rewritten as the following constrained problem:

$$ u=\arg \kern0.3em \underset{z,p}{\min }{E}_2=\arg \kern0.3em \underset{z,p}{\min}\left|{\nabla}_{NL}p\right|+\lambda \sum \left({afe}^{-z}+\frac{b}{2}{f}^2{e}^{-2z}+\left(a+b\right)z\right),\kern1em s.t.\kern0.7em p=z. $$
(20)

The minimization problem (20) is shown to be equivalent to (19). The constrained minimization function (20) can be transformed to the following unconstrained, multi-variable optimization function:

$$ u=\arg \kern0.3em \underset{z,p}{\min }{E}_{2\mu }=\arg \kern0.3em \underset{z,p}{\min}\left|{\nabla}_{NL}p\right|+\frac{\mu }{2}{\left\Vert p-z\right\Vert}_2^2+\lambda \sum \left({afe}^{-z}+\frac{b}{2}{f}^2{e}^{-2z}+\left(a+b\right)z\right) $$
(21)

There are two variables in the regularization function (21). Inspired by the core ideas of the split Bregman algorithm, we use the alternating minimization scheme and obtain the following two subproblems:

$$ \Big\{{\displaystyle \begin{array}{l}{\min}_p{E}_{2p}(p)=\parallel {\nabla}_{NL}p{\parallel}_1+\frac{\mu }{2}\parallel p-z{\parallel}_2^2\\ {}{\min}_z{E}_{2z}(z)=\frac{\mu }{2}\parallel p-z{\parallel}_2^2+\lambda \sum \left({afe}^{-z}+\frac{b}{2}{f}^2{e}^{-2z}+\left(a+b\right)z\right)\end{array}}. $$
(22)

To solve the p subproblem, ∇NLp is replaced with d and the constraint is forced using the Bregman iteration process as follows:

$$ \left({p}^{k+1},{d}^{k+1}\right)=\arg \underset{p,d}{\min}\left({\left\Vert d\right\Vert}_1+\frac{\gamma }{2}{\left\Vert d-{\nabla}_{NL}p-{b}^k\right\Vert}_2^2+\frac{\mu }{2}{\left\Vert p-z\right\Vert}_2^2\right), $$
(23)
$$ {b}^{k+1}={b}^k+{\nabla}_{NL}{p}^{k+1}-{d}^{k+1} $$
(24)

where b is an auxiliary variable. The solution of (23) is obtained by alternately solving the following minimization subproblems:

$$ {p}^{k+1}=\arg \kern0.3em \underset{p}{\min}\frac{\gamma }{2}{\left\Vert {d}^k-{\nabla}_{NL}p-{b}^k\right\Vert}_2^2+\frac{\mu }{2}{\left\Vert p-{z}^k\right\Vert}_2^2, $$
(25)
$$ {d}^{k+1}=\arg \kern0.3em \underset{d}{\min}{\left\Vert d\right\Vert}_1+\frac{\gamma }{2}{\left\Vert d-{\nabla}_{NL}{p}^{k+1}-{b}^k\right\Vert}_2^2. $$
(26)

Minimizing function (25) yields the following optimality equation for pk + 1:

$$ -\gamma {\mathit{\operatorname{div}}}_{NL}\left({d}^k-{\nabla}_{NL}p-{b}^k\right)-\mu \left(p-{z}^k\right)=0. $$
(27)

Using a Gauss-Seidel iterative scheme, pk + 1 is represented as

$$ {p}_i^{k+1}=\frac{1}{\mu +\gamma \sum {w}_{ij}}\left(\gamma \sum {w}_{ij}{p}_j^k+\mu {z}_i^k-\gamma \sum \sqrt{w_{ij}}\left({d}_{ij}^k-{b}_{ij}^k-{d}_{ji}^k+{b}_{ji}^k\right)\right). $$
(28)

To solve dk + 1, we use the soft-shrinkage formula [29] as follows

$$ {d}^{k+1}=\frac{\nabla_{NL}{p}^{k+1}+{b}^k}{\left|{\nabla}_{NL}{p}^{k+1}+{b}^k\right|}\max \left(\left|{\nabla}_{NL}{p}^{k+1}+{b}^k\right|-\frac{1}{\gamma },0\right). $$
(29)
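Componentwise, Eq. (29) is the standard soft-shrinkage (soft-thresholding) operator. A small sketch follows (the function name is ours, and the 0/0 case at |q| = 0 is mapped to 0, which is the usual convention):

```python
import numpy as np

def shrink(q, thresh):
    """Soft-shrinkage operator of Eq. (29), applied componentwise:
    q/|q| * max(|q| - thresh, 0), with the 0/0 case treated as 0."""
    mag = np.maximum(np.abs(q), 1e-12)   # guard against division by zero
    return q / mag * np.maximum(np.abs(q) - thresh, 0.0)

# Entries with magnitude below the threshold are zeroed; the rest
# are pulled toward zero by the threshold, keeping their sign.
d = shrink(np.array([3.0, -0.2, 0.0]), 1.0)
```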

Optimizing zk + 1 is equivalent to solving the following Euler-Lagrange equation:

$$ \mu \left(z-{p}^{k+1}\right)+\lambda \left(\left(a+b\right)-{afe}^{-z}-{bf}^2{e}^{-2z}\right)=0. $$
(30)

The Newton method is used to yield a fast solution:

$$ {z}^{k+1}={z}^k-\frac{\mu \left({z}^k-{p}^{k+1}\right)+\lambda \left(\left(a+b\right)-{afe}^{-{z}^k}-{bf}^2{e}^{-2{z}^k}\right)}{\mu +\lambda \left({afe}^{-{z}^k}+2{bf}^2{e}^{-2{z}^k}\right)}. $$
(31)
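The z-subproblem of Eqs. (30)–(31) can be sketched as a vectorized Newton iteration. The function name, iteration count, and default parameters (taken from Section 4.1, with a = b = 0.5) are our assumptions; a real implementation would iterate until a residual tolerance is met.

```python
import numpy as np

def z_step(p, f, mu=0.1, lam=10.0, a=0.5, b=0.5, iters=20):
    """Newton iterations for the z-subproblem, Eqs. (30)-(31).
    Solves mu*(z - p) + lam*((a+b) - a*f*e^{-z} - b*f^2*e^{-2z}) = 0
    componentwise, starting from z = log f."""
    z = np.log(np.maximum(f, 1e-6))        # initial guess z = log f
    for _ in range(iters):
        e = np.exp(-z)
        F = mu * (z - p) + lam * ((a + b) - a * f * e - b * f ** 2 * e ** 2)
        dF = mu + lam * (a * f * e + 2 * b * f ** 2 * e ** 2)  # always > 0
        z = z - F / dF                     # Newton update, Eq. (31)
    return z
```

Since the denominator dF is strictly positive (it is the second derivative of the strictly convex objective), the Newton step is always well defined.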

All of these equations are combined and summarized in the algorithm that follows:

3.3 Bregman iteration for NLTV minimization

Initialization: z0 = log f, p0 = z0, b0 = d0 = 0, k = 0, and parameters λ, μ, γ, tol

While ‖zk + 1 − zk‖2/‖zk‖2 > tol

$$ {p}_i^{k+1}=\frac{1}{\mu +\gamma \sum {w}_{ij}}\left(\gamma \sum {w}_{ij}{p}_j^k+\mu {z}_i^k-\gamma \sum \sqrt{w_{ij}}\left({d}_{ij}^k-{b}_{ij}^k-{d}_{ji}^k+{b}_{ji}^k\right)\right), $$
$$ {d}^{k+1}=\frac{\nabla_{NL}{p}^{k+1}+{b}^k}{\left|{\nabla}_{NL}{p}^{k+1}+{b}^k\right|}\max \left(\left|{\nabla}_{NL}{p}^{k+1}+{b}^k\right|-\frac{1}{\gamma },0\right), $$
$$ {b}^{k+1}={b}^k+{\nabla}_{NL}{p}^{k+1}-{d}^{k+1}, $$
$$ {z}^{k+1}={z}^k-\frac{\mu \left({z}^k-{p}^{k+1}\right)+\lambda \left(\left(a+b\right)-{afe}^{-{z}^k}-{bf}^2{e}^{-2{z}^k}\right)}{\mu +\lambda \left({afe}^{-{z}^k}+2{bf}^2{e}^{-2{z}^k}\right)}, $$

End

3.3.1 Convergence analysis

We first analyze the convexity of the objective function to simplify our proof for the convergence of the minimization iteration schemes of our proposed model. We then prove that the sequence generated by the alternative iteration scheme converges to the minimum point of (21).

For the transformation z = log u, the second derivative of the fidelity term in (21) with respect to z is λ(af exp(−z) + 2bf2 exp(−2z)), which is always greater than zero. Therefore, this term is strictly convex in z. Next, we prove that the first term |∇NLz| is also convex.

Assuming ∀k1, k2 > 0, k1 + k2 = 1, ∀z1, z2, we obtain

$$ {\displaystyle \begin{array}{l}\left|{\nabla}_{NL}\left({k}_1{z}_1+{k}_2{z}_2\right)\right|={\left(\sum \limits_j{\left({k}_1{z}_{1j}+{k}_2{z}_{2j}-{k}_1{z}_{1i}-{k}_2{z}_{2i}\right)}^2{w}_{ij}\right)}^{\frac{1}{2}}\\ {}\kern6.099997em \le {k}_1{\left(\sum \limits_j{\left({z}_{1j}-{z}_{1i}\right)}^2{w}_{ij}\right)}^{\frac{1}{2}}+{k}_2{\left(\sum \limits_j{\left({z}_{2j}-{z}_{2i}\right)}^2{w}_{ij}\right)}^{\frac{1}{2}}\\ {}\kern5.999997em ={k}_1{\left|{\nabla}_{NL}{z}_1\right|}_i+{k}_2{\left|{\nabla}_{NL}{z}_2\right|}_i.\end{array}} $$
(32)

Since the regularization term is also convex, (21) is strictly convex for all z. We can therefore obtain the denoised image z∗ by finding the global minimum point of the function. We now prove that the above alternating optimization subproblem algorithms converge to the global minimum point. Some fundamental criteria and properties of alternating iteration minimization that we use to establish convergence can be found in [30, 31]. The alternating optimization subproblems are defined as

$$ {p}^{k+1}=S\left({z}^k\right)=S\left(R\left({p}^k\right)\right), $$
(33)
$$ {z}^{k+1}=R\left({p}^{k+1}\right)=R\left(S\left({z}^k\right)\right). $$
(34)

E2μ(z, p) is convex and separately differentiable with respect to z and p. Suppose the unique minimizer of E2μ(z, p) is \( \left(\tilde{z},\tilde{p}\right) \). We note that

$$ \left\{\begin{array}{c}\frac{\partial {E}_{2\mu}\left(\tilde{z},\tilde{p}\right)}{\partial z}=0\\ {}\frac{\partial {E}_{2\mu}\left(\tilde{z},\tilde{p}\right)}{\partial p}=0\end{array}\right.. $$
(35)

This implies that \( \left(\tilde{z},\tilde{p}\right) \) is the minimizer of E2μ. It follows that \( \tilde{z}=R\left(\tilde{p}\right)=R\left(S\left(\tilde{z}\right)\right) \) and \( \tilde{p}=S\left(\tilde{z}\right)=S\left(R\left(\tilde{p}\right)\right) \). Therefore, \( \tilde{z} \) and \( \tilde{p} \) are fixed points.

Since R(S(·)) alternately minimizes E2μ(z, p) and is nonexpansive, we obtain

$$ \left\Vert {z}^{k+1}-\tilde{z}\right\Vert =\left\Vert R\left(S\left({z}^k\right)\right)-R\left(S\left(\tilde{z}\right)\right)\right\Vert =\left\Vert R\left(S\left({z}^k\right)\right)-\tilde{z}\right\Vert \le \left\Vert {z}^k-\tilde{z}\right\Vert . $$
(36)

This implies that \( \left\Vert {z}^k-\tilde{z}\right\Vert \) is monotonically decreasing (note that zk is a bounded sequence). Therefore, we deduce that zk converges to a limit point \( \widehat{z} \), such that

$$ \underset{k\to \infty }{\lim }{z}^k=\widehat{z}. $$
(37)

Similarly, we obtain pk, which converges to a limit point

$$ \underset{k\to \infty }{\lim }{p}^k=\widehat{p}. $$
(38)

The denoised image z∗ is the unique minimizer of the problem E1(z). Let p∗ = z∗; then p∗ and z∗ represent the minimizers of the constrained problem E2(z, p). Suppose the subsequences \( \left\{{z}^{k_j}\right\}\subseteq {\left\{{z}^k\right\}}_{k=1}^{\infty } \) and \( \left\{{p}^{k_j}\right\}\subseteq {\left\{{p}^k\right\}}_{k=1}^{\infty } \) are convergent and minimize the energy E2μ(z, p). We then obtain the following inequality:

$$ {E}_{2\mu}\left({z}^{k_j},{p}^{k_j}\right)={E}_2\left({z}^{k_j},{p}^{k_j}\right)+\frac{\mu }{2}{\left\Vert {p}^{k_j}-{z}^{k_j}\right\Vert}_2^2\le {E}_{2\mu}\left({z}^{\ast },{p}^{\ast}\right)={E}_2\left({z}^{\ast },{p}^{\ast}\right). $$
(39)

When kj →  ∞ ,

$$ {\left\Vert \widehat{p}-\widehat{z}\right\Vert}_2^2\le \frac{2}{\mu}\left({E}_2\left({z}^{\ast },{p}^{\ast}\right)-{E}_2\left(\widehat{z},\widehat{p}\right)\right). $$
(40)

Since μ > 0, then

$$ {E}_2\left(\widehat{z},\widehat{p}\right)\le {E}_2\left({z}^{\ast },{p}^{\ast}\right). $$
(41)

Since z∗,p∗ is the unique solution to the minimization function E2(z, p), we can deduce

$$ {E}_2\left(\widehat{z},\widehat{p}\right)={E}_2\left({z}^{\ast },{p}^{\ast}\right). $$
(42)

Hence, combining (40) and (42), we obtain

$$ {\left\Vert \widehat{p}-\widehat{z}\right\Vert}_2^2=0. $$
(43)

This implies

$$ \widehat{p}=\widehat{z}. $$
(44)

Combining equations (37, 42, 44), we conclude that

$$ \underset{k\to \infty }{\lim }{z}^k=\widehat{z}={z}^{\ast }. $$
(45)

From (45), we conclude that zk converges to z∗, which is the unique minimizer of E1(z).

4 Experiment results and discussions

4.1 Experimental setting

In this subsection, we present some experimental results to demonstrate the effectiveness of our proposed model. We experiment on classical grayscale images and coherent imaging images contaminated by artificial multiplicative Gamma noise. Our proposed model is compared with several recent NLTV-based models, namely the nonlocal-AA and nonlocal-SO models. All simulations are performed in MATLAB 9.0 on an Intel i7 PC with 4 GB of memory.

To reduce the computational complexity, we only compute the ten best neighbors in the 21 × 21 nonlocal search window and the four nearest neighbors in the 5 × 5 patch. We set the stopping criterion tol = 0.001 to terminate the iteration. The regularization parameters are fixed: μ = 0.1, λ = 10, and γ = 20. To objectively estimate the quality of the denoised image, the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) are used, which are defined as

$$ \mathrm{PSNR}(dB)=10{\log}_{10}\frac{255^2\times M\times N}{{\left\Vert u-\overset{\frown }{u}\right\Vert}_2^2}, $$
$$ \mathrm{SSIM}=\frac{\left(2{\mu}_{\overset{\frown }{u}}{\mu}_u+{c}_1\right)\left(2{\sigma}_{\overset{\frown }{u}u}+{c}_2\right)}{\left({\mu}_{\overset{\frown }{u}}^2+{\mu}_u^2+{c}_1\right)\left({\sigma}_u^2+{\sigma}_{\overset{\frown }{u}}^2+{c}_2\right)} $$

where M × N is the size of the image, while u and \( \overset{\frown }{u} \) are, respectively, the original image and the recovered image. \( {\mu}_{\overset{\frown }{u}} \) and μu are the mean values of them; \( {\sigma}_{\overset{\frown }{u}} \) and σu denote their standard deviations; \( {\sigma}_{\overset{\frown }{u}u} \) is the covariance of \( \overset{\frown }{u} \) and u; and c1, c2 are predefined constants.
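Both metrics are straightforward to compute. The sketch below (function names are ours) implements the PSNR formula above and the single-window SSIM as printed; note that the usual SSIM implementation averages this statistic over small local windows rather than computing it once globally.

```python
import numpy as np

def psnr(u, uh):
    """PSNR in dB for 8-bit images (first formula above);
    255^2 * M * N / ||u - uh||^2 equals 255^2 / MSE."""
    mse = np.mean((u.astype(float) - uh.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def ssim_global(u, uh, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM with the standard default constants c1, c2."""
    u, uh = u.astype(float), uh.astype(float)
    mu_u, mu_h = u.mean(), uh.mean()
    var_u, var_h = u.var(), uh.var()
    cov = ((u - mu_u) * (uh - mu_h)).mean()
    return ((2 * mu_u * mu_h + c1) * (2 * cov + c2)) / (
        (mu_u ** 2 + mu_h ** 2 + c1) * (var_u + var_h + c2))
```

As sanity checks, an image compared against itself gives SSIM = 1, and two images differing by the full 255 range give PSNR = 0 dB.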

4.2 Results on classical grayscale images with artificial noise

We use classical grayscale images with artificial multiplicative Gamma noise for the first test. Figure 1 shows the four 256 × 256 8-bit test images used in our experiments. The original noise-free images in Fig. 1 are contaminated by multiplicative noise following the Gamma distribution with mean 1 and variance σ2. In our proposed model, we set a = b = 0.5 to effectively utilize the prior information about the mean and variance of the noise.
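This artificial contamination is easy to reproduce: Gamma speckle with mean 1 and variance σ2 corresponds to a Gamma distribution with shape L = 1/σ2 and scale 1/L. A minimal sketch (the function name and seed are ours, assuming NumPy):

```python
import numpy as np

def speckle(u, L=10, seed=0):
    """Corrupt image u with multiplicative Gamma noise f = u * n (Eq. 8),
    where n ~ Gamma(shape=L, scale=1/L) has mean 1 and variance 1/L."""
    rng = np.random.default_rng(seed)
    n = rng.gamma(shape=L, scale=1.0 / L, size=u.shape)
    return u * n
```

Sampling on a large constant image confirms the moment constraints of Eqs. (13)–(14): the empirical mean of the noise is close to 1 and its variance close to 1/L.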

Fig. 1
figure 1

Original images for experiments. a Lena image. b Women image. c Baboon image. d Cameraman image

Table 1 lists the PSNR and SSIM values to measure the denoising performance of different NLTV-based models on the test images corrupted by different levels of Gamma noise. The highest PSNR and SSIM values are highlighted in italic font. From Table 1, it is apparent that our proposed NLTV model almost always attains the highest PSNR and SSIM values among all the NLTV-based multiplicative noise removal models. The nonlocal-AA model fails to be competitive with the other two methods in most cases. This is attributable to the fact that the nonlocal-AA model is nonconvex, so it is difficult to compute the minimum point. The nonlocal-SO model almost always has lower PSNR and SSIM values than our method because it does not exploit the prior information about the mean and variance of the noise. Table 1 further shows that our proposed NLTV method effectively utilizes prior information about the multiplicative noise and attains higher objective criteria values than the other methods.

Table 1 Comparisons of the results using different models based on different images

To evaluate the visual quality of the different models above, we present the denoised images restored by the different models from the noisy Lena image, which was degraded by multiplicative noise with a variance of 0.05. The restored images are inspected in Fig. 2. We find that the image restored by the nonlocal-AA method (Fig. 2b) produces undesired white point-like artifacts and suffers a loss of details and edges: the textures in the hat and the facial region are seriously destroyed. The nonlocal-SO method can preserve image structures and edges. However, it over-smooths image textures and eliminates details (see the wrinkles on the hat in Fig. 2c). From the image restored by our method (Fig. 2d), we note that our method removes more multiplicative noise and better preserves textures and details than the other methods: the curly hair and the wrinkles on the hat can be well distinguished. In Fig. 3, the 200th lines of the clean, noisy, and deblurred images from Fig. 2 are presented. We observe that the line reconstructed by our method (Fig. 3d) is closer to the original line than those restored by the other methods.

Fig. 2
figure 2

Results of different models on the Lena image degraded by Gamma noise with σ = 0.05. a Noisy image, b nonlocal-AA, c nonlocal-SO, and d proposed method

Fig. 3
figure 3

The 200th lines of the clean, noisy, and deblurred images of different methods. a Noise and clean lines. b Deblurred line by nonlocal-AA method. c Deblurred line by nonlocal SO-method. d Deblurred line by the proposed method

To highlight the competitive visual performance of our proposed model for multiplicative noise removal, parts of the denoised Cameraman and Woman images are presented to measure the texture-preserving property of the different models in Figs. 4 and 5, respectively. To further evaluate noise reduction and texture preservation, we also present the method noise images, that is, the difference between the original image and the denoised images, in Figs. 4 and 5. The nonlocal-AA results contain numerous white points and staircase effects in the smooth areas (Figs. 4b and 5b). The residual noise and lost textures in the method noise images of the nonlocal-AA method (Figs. 4f and 5f) reveal that it provides limited noise suppression and damages details. The nonlocal-SO method (Figs. 4c and 5c) reduces the staircase effect and effectively removes noise (see the smooth regions in Figs. 4c and 5c), but it blurs image edges and textures (the camera's edges in Fig. 4c and the hood's texture in Fig. 5c are barely visible). Our proposed method exhibits the best visual appearance among all the NLTV-based methods: the camera's edges in Fig. 4d and the hood's texture in Fig. 5d are clearly visible. Additionally, comparing the method noise images in Figs. 4 and 5, we can observe that our proposed method produces less residual noise and preserves more textures than the other methods. The results shown in Figs. 4 and 5 let us conclude that our proposed model removes multiplicative noise while simultaneously reconstructing more image textures and details than the other models.

Fig. 4
figure 4

Zoomed version of the denoised image Cameraman of different methods degraded by a Gamma noise with variance 0.1. a Noisy image, b nonlocal-AA method, c nonlocal-SO method, d proposed method, e noise image, f method noise image of nonlocal-AA method, g method noise image of nonlocal-SO method, and h method noise image of proposed method

Fig. 5
figure 5

Zoomed version of the denoised image Woman of different methods degraded by a Gamma noise with variance 0.02. a Noisy image, b nonlocal-AA method, c nonlocal-SO method, d proposed method, e noise image, f method noise image of nonlocal-AA method, g method noise image of nonlocal-SO method, and h method noise image of proposed method

4.3 Results on images acquired by coherent imaging system

Since multiplicative Gamma noise often occurs in the coherent imaging systems, we compare the performance of our proposed model with other models on more complicated images acquired by coherent imaging technique where it is not easy to discern the foreground from the background. In this section, we use ultrasonic image, OCT image, and SAR image to verify the effectiveness of our proposed method.

Figure 6 shows visual comparisons of the denoised images processed by different NLTV-based methods. The original 256 × 256 8-bit ultrasonic image is corrupted by multiplicative Gamma noise with variance 0.1. Figure 6c–e respectively shows the denoised images obtained using the nonlocal-AA method, the nonlocal-SO method, and our method. A texture region marked by a red-lined box is selected for visual comparison. Observing the edges and details in the red-lined box, effects similar to those in the above examples appear here. The nonlocal-AA method cannot adequately remove the multiplicative Gamma noise: some residual noise exists in the denoised image, and the artifacts obscure the edges. The nonlocal-SO method shows better performance than the nonlocal-AA method, but some edges in the red-lined box are seriously obscured or invisible. Our proposed method attains the highest PSNR and SSIM values and exhibits the best visual quality among all the methods: the edges in the red-lined box are clearly recovered, and residual noise is difficult to find.

Fig. 6

Recovered ultrasonic images via different methods. a Original image, b noisy image (PSNR = 29.2360, SSIM = 0.8174), c nonlocal-AA method (PSNR = 31.2127, SSIM = 0.8488), d nonlocal-SO method (PSNR = 32.3298, SSIM = 0.8878), and e proposed method (PSNR = 32.6496, SSIM = 0.8994)

Moreover, we use an OCT image (resolution 128 × 128 × 8) and a SAR image (resolution 128 × 128 × 8), contaminated by multiplicative Gamma noise with variance 0.1 and 0.08, respectively, to further verify the effectiveness of our proposed method. To highlight the differences in edge preservation and texture contrast enhancement between the methods, a square region containing salient edges and complicated textures is selected for enlarged views. The original and denoised images, together with the zoomed versions of the selected region marked by a red box, are presented in Figs. 7 and 8. The noisy images (Figs. 7b and 8b) show that speckle noise reduces the visual quality, resulting in barely visible textures and blurred edges. The nonlocal-AA method (Figs. 7c and 8c) neither reduces the noise effectively nor keeps the edges sharp. The nonlocal-SO method (Figs. 7d and 8d) eliminates the artifacts and improves on the nonlocal-AA method, but blur remains in the denoised images, especially at the edges (see the zoomed versions in Figs. 7d and 8d). Since our method makes full use of the prior information about the noise and is an extension of the nonlocal-SO method, it is more effective than the nonlocal-SO method at removing noise and preserving image details. The textures recovered by our method (Figs. 7e and 8e) are clearer and more distinct than those recovered by the nonlocal-SO method. Additionally, our proposed method obtains larger PSNR and smaller MSE values than the other methods, again indicating that it is superior in removing multiplicative Gamma noise and well suited to complicated images.

Fig. 7

Recovered OCT images and zoomed square regions marked by red-lined boxes via different methods. a Original image, b noisy image (PSNR = 20.8433, SSIM = 0.6984), c nonlocal-AA method (PSNR = 21.5102, SSIM = 0.7621), d nonlocal-SO method (PSNR = 22.0345, SSIM = 0.8315), and e proposed method (PSNR = 22.4348, SSIM = 0.8879)

Fig. 8

Recovered SAR images and zoomed square region marked by red-lined boxes via different methods. a Original image, b noisy image (PSNR = 24.4431, SSIM = 0.7095), c nonlocal-AA method (PSNR = 25.9805, SSIM = 0.7919), d nonlocal-SO method (PSNR = 26.1081, SSIM = 0.8423), and e proposed method (PSNR = 27.0234, SSIM = 0.8513)

5 Conclusion

This study utilizes prior information about the noise and proposes a strictly convex NLTV-based multiplicative noise removal model within the maximum a posteriori (MAP) estimation framework. Based on the split Bregman iteration algorithm, we design an efficient alternating minimization iteration to optimize the proposed NLTV model. We also prove that the alternating minimization iteration converges to a fixed point, which is the unique solution of the original minimization problem. Finally, comparisons with related NLTV-based multiplicative noise removal models indicate that our proposed NLTV method effectively removes multiplicative noise and outperforms the other related NLTV models.
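
The alternating split Bregman scheme summarized above can be illustrated on a much simpler problem. The sketch below applies it to 1D additive TV denoising rather than to the paper's NLTV multiplicative-noise model; the function name and parameter values are our own illustration:

```python
import numpy as np

def split_bregman_tv(f, lam=0.2, mu=1.0, iters=200):
    """Split Bregman iteration for 1D TV denoising:
        min_u 0.5*||u - f||^2 + lam*||D u||_1,
    a toy analogue of the alternating minimization used for the NLTV model.
    Each sweep alternates a quadratic u-step, a shrinkage d-step, and a
    Bregman variable update.
    """
    n = f.size
    # forward-difference operator D (shape (n-1, n))
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    A = np.eye(n) + mu * D.T @ D          # system matrix for the u-subproblem
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(iters):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))          # u-step
        t = D @ u + b
        d = np.sign(t) * np.maximum(np.abs(t) - lam / mu, 0.0)  # shrinkage
        b = t - d                                               # Bregman update
    return u
```

The same three-step structure carries over to the nonlocal setting, with D replaced by the nonlocal gradient and the data term by the multiplicative-noise fidelity.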

The proposed method is suitable for multiplicative noise removal and can be successfully applied to images from coherent imaging systems. However, the alternating iteration algorithm involves a large number of predefined constants and parameters, whose values strongly influence the denoising result of the proposed method. In our experiments, these values are set manually. Adaptively adjusting these parameters to obtain better denoising results is a direction for future research.

It is worth mentioning that the proposed method cannot be directly applied to other types of noise removal problems, such as mixed noise. For example, in electron microscopy imaging systems, the captured images are usually contaminated by a superposition of Gaussian and Poisson noise. Future research is required to extend NLTV to removing different types of noise, especially mixed noise. On the other hand, the proposed method can also be applied to video sequences, which is not presented in this paper due to space limitations. However, we have focused on utilizing the correlation information within a single image and have not considered the similar content and correlation information across different images. Adjacent frames of a video sequence are strongly correlated and contain a large amount of redundant information. Accordingly, utilizing this similarity and redundancy in video sequences to improve our proposed method is another direction for future research.

Abbreviations

AA:

Aubert and Aujol

HNW:

Huang, Ng, and Wen

MAP:

Maximum a posteriori estimator

NLM:

Nonlocal means filter

NLTV:

Nonlocal total variation

OCT:

Optical coherence tomography

PSNR:

Peak signal-to-noise ratio

SAR:

Synthetic aperture radar

SO:

Shi and Osher

SSIM:

Structural similarity index

TV:

Total variation

References

  1. A.N. Tikhonov, V.Y. Arsenin, Solution of ill-posed problems. Math. Comput. 32(144), 491–491 (1977).

  2. R. Acar, C.R. Vogel, Analysis of bounded variation penalty methods for ill-posed problems. Inverse Problems 10(6), 1217–1229 (1997).

  3. L.I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60(1–4), 259–268 (1992).

  4. T. Chan, A. Marquina, P. Mulet, High-order total variation-based image restoration. SIAM J. Sci. Comput. 22(2), 503–516 (2000).

  5. A. Beck, M. Teboulle, Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 18(11), 2419–2434 (2009).

  6. A. Chambolle, An algorithm for total variation minimization and applications. J. Math. Imaging Vision 20(1–2), 89–97 (2004).

  7. N.B. Bras, J. Bioucas-Dias, R.C. Martins, et al., An alternating direction algorithm for total variation reconstruction of distributed parameters. IEEE Trans. Image Process. 21(6), 3004–3016 (2012).

  8. G. Li, X. Huang, S.G. Li, Adaptive Bregmanized total variation model for mixed noise removal. AEU Int. J. Electron. Commun. 80, 29–35 (2017).

  9. L. Zhu, W. Wang, J. Qin, et al., Fast feature-preserving speckle reduction for ultrasound images via phase congruency. Signal Process. 134, 275–284 (2017).

  10. S. Adabi, S. Conforto, A. Clayton, et al., An intelligent speckle reduction algorithm for optical coherence tomography images, in International Conference on Photonics, Optics and Laser Technology. IEEE 4, 38–43 (2017).

  11. N. Tabassum, A. Vaccari, S. Acton, Speckle removal and change preservation by distance-driven anisotropic diffusion of synthetic aperture radar temporal stacks. Digital Signal Process. 74, 43–55 (2018).

  12. L. Denis, F. Tupin, J. Darbon, et al., SAR image regularization with fast approximate discrete minimization. IEEE Trans. Image Process. 18(7), 1588 (2009).

  13. S. Setzer, G. Steidl, T. Teuber, Deblurring Poissonian images by split Bregman techniques. J. Vis. Commun. Image Represent. 21(3), 193–199 (2010).

  14. L. Rudin, P.L. Lions, S. Osher, Multiplicative denoising and deblurring: theory and algorithms, in Geometric Level Set Methods in Imaging, Vision, and Graphics (Springer, New York, 2003), pp. 103–119.

  15. G. Aubert, J.F. Aujol, A variational approach to removing multiplicative noise. SIAM J. Appl. Math. 68(4), 925–946 (2008).

  16. J. Shi, S. Osher, A nonlinear inverse scale space method for a convex multiplicative noise model. SIAM J. Imaging Sci. 1(3), 294–321 (2008).

  17. Y.M. Huang, M.K. Ng, Y.W. Wen, A new total variation method for multiplicative noise removal. SIAM J. Imaging Sci. 2(1), 20–40 (2009).

  18. Y. Dong, T. Zeng, A convex variational model for restoring blurred images with multiplicative noise. SIAM J. Imaging Sci. 6(3), 1598–1625 (2013).

  19. J. Lu, L. Shen, C. Xu, et al., Multiplicative noise removal in imaging: an exp-model and its fixed-point proximity algorithm. Appl. Comput. Harmon. Anal. 41(2), 518–539 (2016).

  20. S. Fu, C. Zhang, Adaptive non-convex total variation regularization for image restoration. Electron. Lett. 46(13), 907–908 (2010).

  21. L.A. Vese, S.J. Osher, Modeling textures with total variation minimization and oscillating patterns in image processing. J. Sci. Comput. 19(1–3), 553–572 (2003).

  22. A. Buades, B. Coll, J.M. Morel, A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 4(2), 490–530 (2005).

  23. M. Chen, H. Zhang, G. Lin, et al., A new local and nonlocal total variation regularization model for image denoising. Clust. Comput. 6, 1–17 (2018).

  24. X. Nie, X. Huang, W. Feng, A new nonlocal TV-based variational model for SAR image despeckling based on the G0 distribution. Digital Signal Process. 68, 44–56 (2017).

  25. J. Li, Q. Yuan, H. Shen, et al., Hyperspectral image recovery employing a multidimensional nonlocal total variation model. Signal Process. 111, 230–248 (2015).

  26. L. Shuai, X. Zhao, G. Wang, Nonlocal TV model for multiplicative noise with Rayleigh distribution removal. Chin. J. Sci. Instrum. 36(7), 1570–1576 (2015).

  27. F. Dong, H. Zhang, D.X. Kong, Nonlocal total variation models for multiplicative noise removal using split Bregman iteration. Math. Comput. Model. 55(3), 939–954 (2012).

  28. G. Gilboa, S. Osher, Nonlocal operators with applications to image processing. Multiscale Model. Simul. 7(3), 1005–1028 (2008).

  29. W. Li, Q. Li, W. Gong, et al., Total variation blind deconvolution employing split Bregman iteration. J. Vis. Commun. Image Represent. 23(3), 409–417 (2012).

  30. R.Q. Jia, H. Zhao, W. Zhao, Convergence analysis of the Bregman method for the variational model of image denoising. Appl. Comput. Harmon. Anal. 27(3), 367–379 (2009).

  31. J.F. Cai, S. Osher, Z. Shen, Convergence of the linearized Bregman iteration for ℓ1-norm minimization. Math. Comput. 78(268), 2127–2136 (2009).


Acknowledgements

The authors thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.

Funding

This research was supported by the Open Fund Project of the Artificial Intelligence Key Laboratory of Sichuan Province (Grant no. 2016RYY02), and the Scientific Research Project of Sichuan University of Science and Engineering (Grant no. 2018RCL17 and no. 2015RC16).

Availability of data and materials

Available from the authors on request.

Author information

Authors and Affiliations

Authors

Contributions

All authors take part in the discussion of the work described in this paper. The author MC conceived the idea, optimized the model, and did the experiments of the paper. HZ, QH, and CH were involved in the extensive discussions and evaluations, and all authors read and approved the final manuscript.

Corresponding author

Correspondence to Hua Zhang.

Ethics declarations

Authors’ information

Mingju Chen (1982-) received the M.S. degree in College of Communication Engineering from Chongqing University of Posts and Telecommunications in 2007. He is currently pursuing the Ph.D. degree in Southwest University of Science and Technology. His research interests include machine vision inspection systems and image processing.

Hua Zhang (1969-) received his PhD degree in College of Communication Engineering from Chongqing University in 2006. He is currently a professor in School of Information Engineering of Southwest University of Science and Technology. His research interests include nuclear detection technology, robot technology, and machine vision inspection systems.

Qiang Han(1987-) received the B.S degree from Ocean university of China in 2010, and M.S. degree from Sichuan University of Science and Engineering in 2013. Now, he is currently pursuing his PhD degree in Southwest University of Science and Technology. His current research interests include consensus and coordination in multi-agent systems, networked control system theory, and its application.

Chencheng Huang(1984-) received BS degree in applied mathematics from Shijiazhuang Tiedao University in 2007, Master degree in applied mathematics from Chongqing University in 2011, and a PhD degree from Chongqing University in 2015. He is currently a lecturer with School of Automation and Information Engineering of Sichuan University of Science and Engineering. His research interests are image processing.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Chen, M., Zhang, H., Han, Q. et al. A convex nonlocal total variation regularization algorithm for multiplicative noise removal. J Image Video Proc. 2019, 28 (2019). https://doi.org/10.1186/s13640-019-0410-2


Keywords