In our study, we obtain a strictly convex NLTV model for multiplicative noise removal and employ Bregman iteration to optimize it.
The proposed model
Assume that prior information about the mean and variance of the multiplicative noise is known in advance, that is,
$$ \frac{1}{N}\int \eta =1, $$
(13)
$$ \frac{1}{N}\int {\left(\eta -1\right)}^2={\sigma}^2 $$
(14)
where N = ∫ 1. The mean of the noise is 1, and its variance equals \( {\sigma}^2 \). In the aforementioned approaches, such as (10)-(12), only the density function of the noisy image is exploited in the MAP estimation to derive the minimization problem. By introducing the two previous constraints (mean and variance) into the nonlocal AA model, we improve the NLTV model and obtain the following constrained optimization problem:
$$ u=\arg \kern0.2em \underset{u}{\min}\left\{\left|{\nabla}_{NL}u\right|+\lambda \int \left(\log u+\frac{f}{u}\right)\right\}\kern0.6em s.t.\kern0.3em \frac{1}{N}\int \frac{f}{u}=1\kern0.4em \mathrm{and}\kern0.3em \frac{1}{N}\int {\left(\frac{f}{u}-1\right)}^2={\sigma}^2. $$
(15)
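As a side check, the two constraints (13) and (14) can be verified numerically. The following minimal sketch assumes gamma-distributed speckle, a common model for multiplicative noise, with an illustrative number of looks L; neither the distribution nor L is specified here, so both are assumptions for illustration only.

```python
import numpy as np

# Minimal check of (13)-(14), assuming gamma-distributed speckle with L looks
# (illustrative assumption); then the mean is 1 and sigma^2 = 1/L.
rng = np.random.default_rng(0)
L = 10
eta = rng.gamma(shape=L, scale=1.0 / L, size=512 * 512)

mean_eta = eta.mean()                 # discrete analogue of (1/N) * int eta
var_eta = ((eta - 1.0) ** 2).mean()   # discrete analogue of (1/N) * int (eta - 1)^2

print(f"mean ~ {mean_eta:.4f}  (constraint (13): close to 1)")
print(f"var  ~ {var_eta:.4f}  (constraint (14): close to 1/L = {1.0 / L:.4f})")
```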
Our goal is to solve the above equality-constrained minimization problem (15). The constrained optimization problem is converted into the following unconstrained formulation:
$$ u=\arg \kern0.2em \underset{u}{\min}\left\{\left|{\nabla}_{NL}u\right|+{\lambda}_1\int \left(\log u+\frac{f}{u}\right)+{\lambda}_2\int \frac{f}{u}+{\lambda}_3\int {\left(\frac{f}{u}-1\right)}^2\right\}\kern0.7em . $$
(16)
The above constrained minimization problem can be simplified as
$$ u=\arg \kern0.2em \underset{u}{\min}\left\{\left|{\nabla}_{NL}u\right|+\lambda \int \left(a\frac{f}{u}+\frac{b}{2}{\left(\frac{f}{u}\right)}^2+c\log u\right)\right\}\kern0.7em $$
(17)
where a, b, and c are constraint parameters, all greater than 0. \( H(u)=\lambda \left(a\frac{f}{u}+\frac{b}{2}{\left(\frac{f}{u}\right)}^2+c\log u\right) \) is the fidelity term, which is continuously differentiable for u > 0. Differentiating with respect to u gives
$$ \frac{\partial H(u)}{\partial u}=\lambda \left(\frac{c}{u}-a\frac{f}{u^2}-b\frac{f^2}{u^3}\right). $$
(18)
The initial data satisfy \( {u}^{(0)}=f \), and H(u) attains its minimum at \( u={u}^{(0)} \); hence (18) must vanish at u = f, and we obtain c = a + b.
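Explicitly, substituting u = f into (18) and requiring the derivative to vanish gives
$$ {\left.\frac{\partial H(u)}{\partial u}\right|}_{u=f}=\lambda \left(\frac{c}{f}-\frac{af}{f^2}-\frac{b{f}^2}{f^3}\right)=\frac{\lambda }{f}\left(c-a-b\right)=0\kern0.5em \Rightarrow \kern0.5em c=a+b. $$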
Unfortunately, this model is nonconvex. As before, we introduce the variable z = log u and, replacing the regularizer \( \left|{\nabla}_{NL}u\right| \) with \( \left|{\nabla}_{NL}z\right| \), convert (17) into the following minimization problem:
$$ z=\arg \kern0.3em \underset{z}{\min }{E}_1(z)=\arg \kern0.3em \underset{z}{\min}\left|{\nabla}_{NL}z\right|+\lambda \sum \left({afe}^{-z}+\frac{b}{2}{f}^2{e}^{-2z}+\left(a+b\right)z\right). $$
(19)
Bregman iteration for the proposed model
The above problem (19) can be solved with an iterative multivariable minimization algorithm. Note that the fidelity term in (19) contains exponential terms. Thus, we introduce an auxiliary variable p (with p = z) to split the problem into subproblems that are easier to solve. Equation (19) is then rewritten as the following constrained problem:
$$ \left(z,p\right)=\arg \kern0.3em \underset{z,p}{\min }{E}_2=\arg \kern0.3em \underset{z,p}{\min}\left|{\nabla}_{NL}p\right|+\lambda \sum \left({afe}^{-z}+\frac{b}{2}{f}^2{e}^{-2z}+\left(a+b\right)z\right),\kern1em s.t.\kern0.7em p=z. $$
(20)
The minimization problem (20) is equivalent to (19). By penalizing the constraint p = z, the constrained problem (20) is transformed into the following unconstrained, multivariable optimization problem:
$$ \left(z,p\right)=\arg \kern0.3em \underset{z,p}{\min }{E}_{2\mu }=\arg \kern0.3em \underset{z,p}{\min}\left|{\nabla}_{NL}p\right|+\frac{\mu }{2}{\left\Vert p-z\right\Vert}_2^2+\lambda \sum \left({afe}^{-z}+\frac{b}{2}{f}^2{e}^{-2z}+\left(a+b\right)z\right) $$
(21)
The objective function (21) involves two variables. Inspired by the core ideas of the split Bregman algorithm, we apply an alternating minimization scheme and obtain the following two subproblems:
$$ \left\{\begin{array}{l}{\min}_p{E}_{2p}(p)={\left\Vert {\nabla}_{NL}p\right\Vert}_1+\frac{\mu }{2}{\left\Vert p-z\right\Vert}_2^2\\ {}{\min}_z{E}_{2z}(z)=\frac{\mu }{2}{\left\Vert p-z\right\Vert}_2^2+\lambda \sum \left({afe}^{-z}+\frac{b}{2}{f}^2{e}^{-2z}+\left(a+b\right)z\right)\end{array}\right.. $$
(22)
To solve the p subproblem, \( {\nabla}_{NL}p \) is replaced with an auxiliary variable d, and the constraint \( d={\nabla}_{NL}p \) is enforced using the Bregman iteration process as follows:
$$ \left({p}^{k+1},{d}^{k+1}\right)=\arg \underset{p,d}{\min}\left({\left\Vert d\right\Vert}_1+\frac{\gamma }{2}{\left\Vert d-{\nabla}_{NL}p-{b}^k\right\Vert}_2^2+\frac{\mu }{2}{\left\Vert p-{z}^k\right\Vert}_2^2\right), $$
(23)
$$ {b}^{k+1}={b}^k+{\nabla}_{NL}{p}^{k+1}-{d}^{k+1} $$
(24)
where b is an auxiliary (Bregman) variable. The solution of (23) is obtained by solving the following alternating minimization subproblems:
$$ {p}^{k+1}=\arg \kern0.3em \underset{p}{\min}\frac{\gamma }{2}{\left\Vert {d}^k-{\nabla}_{NL}p-{b}^k\right\Vert}_2^2+\frac{\mu }{2}{\left\Vert p-{z}^k\right\Vert}_2^2, $$
(25)
$$ {d}^{k+1}=\arg \kern0.3em \underset{d}{\min }{\left\Vert d\right\Vert}_1+\frac{\gamma }{2}{\left\Vert d-{\nabla}_{NL}{p}^{k+1}-{b}^k\right\Vert}_2^2. $$
(26)
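For concreteness, the discrete nonlocal gradient and divergence that appear in (23)-(27) can be sketched as follows for a dense, symmetric weight matrix w (in practice the weights are sparse and computed from patch similarities); the function names and the NumPy implementation are ours, not part of the original algorithm.

```python
import numpy as np

def nl_grad(p, w):
    """Nonlocal gradient: (grad_NL p)_{ij} = (p_j - p_i) * sqrt(w_ij)."""
    return (p[None, :] - p[:, None]) * np.sqrt(w)

def nl_div(v, w):
    """Nonlocal divergence: (div_NL v)_i = sum_j sqrt(w_ij) * (v_ij - v_ji),
    so that -div_NL is the adjoint of grad_NL."""
    return np.sum(np.sqrt(w) * (v - v.T), axis=1)
```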
Minimizing (25) with respect to p yields the following optimality equation for \( {p}^{k+1} \):
$$ -\gamma {\mathit{\operatorname{div}}}_{NL}\left({d}^k-{\nabla}_{NL}p-{b}^k\right)-\mu \left(p-{z}^k\right)=0. $$
(27)
Using a Gauss-Seidel iterative scheme, \( {p}^{k+1} \) is computed as
$$ {p}_i^{k+1}=\frac{1}{\mu +\gamma \sum {w}_{ij}}\left(\gamma \sum {w}_{ij}{p}_j^k+\mu {z}_i^k-\gamma \sum \sqrt{w_{ij}}\left({d}_{ij}^k-{b}_{ij}^k-{d}_{ji}^k+{b}_{ji}^k\right)\right). $$
(28)
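A minimal sketch of the Gauss-Seidel sweep (28) is given below, assuming p and z are stored as vectors and the nonlocal variables d and b as matrices indexed (i, j); the function name, the dense weight matrix, and the number of sweeps are our illustrative choices.

```python
import numpy as np

def gauss_seidel_p(p, z, d, b, w, mu, gamma, sweeps=1):
    """Gauss-Seidel sweeps of the p-update (28)."""
    p = p.copy()
    sw = np.sqrt(w)
    denom = mu + gamma * w.sum(axis=1)          # mu + gamma * sum_j w_ij
    for _ in range(sweeps):
        for i in range(p.size):                 # in-place: later pixels see updated values
            coupling = sw[i] @ (d[i] - b[i] - d[:, i] + b[:, i])
            p[i] = (gamma * (w[i] @ p) + mu * z[i] - gamma * coupling) / denom[i]
    return p
```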
To solve for \( {d}^{k+1} \), we use the soft-shrinkage formula [29] as follows
$$ {d}^{k+1}=\frac{\nabla_{NL}{p}^{k+1}+{b}^k}{\left|{\nabla}_{NL}{p}^{k+1}+{b}^k\right|}\max \left(\left|{\nabla}_{NL}{p}^{k+1}+{b}^k\right|-\frac{1}{\gamma },0\right). $$
(29)
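The shrinkage (29) can be sketched as below, reading |·| as the per-pixel Euclidean norm of the nonlocal gradient vector (the same norm used in (32)); this reading, the function name, and the small safeguard against division by zero are our choices, and nl_grad refers to the helper sketched earlier.

```python
import numpy as np

def shrink_rows(q, thresh):
    """Soft-shrinkage (29): row i of q (the nonlocal vector at pixel i) is scaled by
    max(|q_i| - thresh, 0) / |q_i|, where |q_i| is its Euclidean norm."""
    mag = np.linalg.norm(q, axis=1, keepdims=True)
    return np.maximum(mag - thresh, 0.0) / np.maximum(mag, 1e-12) * q

# d update, using nl_grad from the earlier sketch:
# d = shrink_rows(nl_grad(p, w) + b, 1.0 / gamma)
```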
Computing \( {z}^{k+1} \) is equivalent to solving the following Euler-Lagrange equation:
$$ \mu \left(z-{p}^{k+1}\right)+\lambda \left(\left(a+b\right)-{afe}^{-z}-{bf}^2{e}^{-2z}\right)=0. $$
(30)
The Newton method is used to yield a fast solution:
$$ {z}^{k+1}={z}^k-\frac{\mu \left({z}^k-{p}^{k+1}\right)+\lambda \left(\left(a+b\right)-{afe}^{-{z}^k}-{bf}^2{e}^{-2{z}^k}\right)}{\mu +\lambda \left({afe}^{-{z}^k}+2{bf}^2{e}^{-2{z}^k}\right)}. $$
(31)
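A minimal sketch of the Newton update (31), applied elementwise to the image arrays, is shown below; the number of inner iterations is left as a parameter, which is our choice rather than something prescribed by the method.

```python
import numpy as np

def newton_z(z, p, f, a, b, lam, mu, iters=1):
    """Newton iterations (31) for the z-subproblem (30)."""
    for _ in range(iters):
        g = mu * (z - p) + lam * ((a + b) - a * f * np.exp(-z) - b * f ** 2 * np.exp(-2 * z))
        dg = mu + lam * (a * f * np.exp(-z) + 2 * b * f ** 2 * np.exp(-2 * z))
        z = z - g / dg
    return z
```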
All of these equations are combined and summarized in the algorithm that follows:
Bregman iteration for NLTV minimization
Initialization: \( {z}^0=\log f \), \( {p}^0={z}^0 \), \( {b}^0={d}^0=0 \), k = 0; parameters λ, μ, γ, tol
While \( {\left\Vert {z}^{k+1}-{z}^k\right\Vert}_2/{\left\Vert {z}^k\right\Vert}_2> \) tol
$$ {p}_i^{k+1}=\frac{1}{\mu +\gamma \sum {w}_{ij}}\left(\gamma \sum {w}_{ij}{p}_j^k+\mu {z}_i^k-\gamma \sum \sqrt{w_{ij}}\left({d}_{ij}^k-{b}_{ij}^k-{d}_{ji}^k+{b}_{ji}^k\right)\right), $$
$$ {d}^{k+1}=\frac{\nabla_{NL}{p}^{k+1}+{b}^k}{\left|{\nabla}_{NL}{p}^{k+1}+{b}^k\right|}\max \left(\left|{\nabla}_{NL}{p}^{k+1}+{b}^k\right|-\frac{1}{\gamma },0\right), $$
$$ {b}^{k+1}={b}^k+{\nabla}_{NL}{p}^{k+1}-{d}^{k+1}, $$
$$ {z}^{k+1}={z}^k-\frac{\mu \left({z}^k-{p}^{k+1}\right)+\lambda \left(\left(a+b\right)-{afe}^{-{z}^k}-{bf}^2{e}^{-2{z}^k}\right)}{\mu +\lambda \left({afe}^{-{z}^k}+2{bf}^2{e}^{-2{z}^k}\right)}, $$
End
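Putting the pieces together, a compact sketch of the overall loop might look as follows; it assumes the helper functions nl_grad, gauss_seidel_p, shrink_rows, and newton_z from the earlier sketches are in scope, that f is a positive noisy image flattened to a vector, and that the nonlocal weights w have been precomputed. The restored image is recovered as u = exp(z).

```python
import numpy as np

def nltv_bregman(f, w, lam, mu, gamma, a, b, tol=1e-4, max_iter=200):
    """Sketch of the Bregman iteration above for the NLTV multiplicative-noise model."""
    z = np.log(f)                        # z = log u, initialized with u = f
    p = z.copy()
    n = f.size
    d = np.zeros((n, n))                 # splitting variable for grad_NL p (dense for brevity)
    bvar = np.zeros((n, n))              # Bregman variable b
    for _ in range(max_iter):
        p = gauss_seidel_p(p, z, d, bvar, w, mu, gamma)       # (25) / (28)
        d = shrink_rows(nl_grad(p, w) + bvar, 1.0 / gamma)    # (26) / (29)
        bvar = bvar + nl_grad(p, w) - d                       # (24)
        z_new = newton_z(z, p, f, a, b, lam, mu)              # (30) / (31)
        rel = np.linalg.norm(z_new - z) / max(np.linalg.norm(z), 1e-12)
        z = z_new
        if rel < tol:                    # stopping rule of the algorithm above
            break
    return np.exp(z)                     # recover u = exp(z)
```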
Convergence analysis
We first analyze the convexity of the objective function to simplify our proof of convergence for the minimization iteration scheme of the proposed model. We then prove that the sequence generated by the alternating iteration scheme converges to the minimum point of (21).
For the transformation z = log u, the second derivative of the fidelity term in (21) with respect to z is \( \lambda \left({afe}^{-z}+2{bf}^2{e}^{-2z}\right) \), which is always greater than zero. Therefore, this term is strictly convex in z. Next, we prove that the first term \( \left|{\nabla}_{NL}z\right| \) is also convex.
For all \( {k}_1,{k}_2>0 \) with \( {k}_1+{k}_2=1 \) and all \( {z}_1,{z}_2 \), we obtain
$$ {\displaystyle \begin{array}{l}{\left|{\nabla}_{NL}\left({k}_1{z}_1+{k}_2{z}_2\right)\right|}_i={\left(\sum \limits_j{\left({k}_1{z}_{1j}+{k}_2{z}_{2j}-{k}_1{z}_{1i}-{k}_2{z}_{2i}\right)}^2{w}_{ij}\right)}^{\frac{1}{2}}\\ {}\kern6.099997em \le {k}_1{\left(\sum \limits_j{\left({z}_{1j}-{z}_{1i}\right)}^2{w}_{ij}\right)}^{\frac{1}{2}}+{k}_2{\left(\sum \limits_j{\left({z}_{2j}-{z}_{2i}\right)}^2{w}_{ij}\right)}^{\frac{1}{2}}\\ {}\kern6.099997em ={k}_1{\left|{\nabla}_{NL}{z}_1\right|}_i+{k}_2{\left|{\nabla}_{NL}{z}_2\right|}_i.\end{array}} $$
(32)
Since the first term is also convex, the objective in (21) is strictly convex with respect to z. We obtain the denoised image \( {z}^{\ast } \) by minimizing the function and reaching the global minimum point. We now prove that the above alternating optimization subproblems converge to this global minimum point. Some fundamental criteria and properties of alternating minimization that are used to establish the convergence are given in [30, 31]. The alternating optimization subproblems are defined as
$$ {p}^{k+1}=S\left({z}^k\right)=S\left(R\left({p}^k\right)\right), $$
(33)
$$ {z}^{k+1}=R\left({p}^{k+1}\right)=R\left(S\left({z}^k\right)\right). $$
(34)
\( {E}_{2\mu}\left(z,p\right) \) is convex and separately differentiable with respect to z and p. Suppose the unique minimizer of \( {E}_{2\mu}\left(z,p\right) \) is \( \left(\tilde{z},\tilde{p}\right) \). We note that
$$ \left\{\begin{array}{c}\frac{\partial {E}_{2\mu}\left(\tilde{z},\tilde{p}\right)}{\partial z}=0\\ {}\frac{\partial {E}_{2\mu}\left(\tilde{z},\tilde{p}\right)}{\partial p}=0\end{array}\right.. $$
(35)
This implies that \( \left(\tilde{z},\tilde{p}\right) \) is the minimizer of \( {E}_{2\mu } \). It follows that \( \tilde{z}=R\left(\tilde{p}\right)=R\left(S\left(\tilde{z}\right)\right) \) and \( \tilde{p}=S\left(\tilde{z}\right)=S\left(R\left(\tilde{p}\right)\right) \). Therefore, \( \tilde{z} \) and \( \tilde{p} \) are fixed points of \( R\left(S\left(\cdot \right)\right) \) and \( S\left(R\left(\cdot \right)\right) \), respectively.
Since R(S(·)) is the alternating minimization operator of the convex function \( {E}_{2\mu}\left(z,p\right) \) and is nonexpansive, we obtain
$$ \left\Vert {z}^{k+1}-\tilde{z}\right\Vert =\left\Vert R\left(S\left({z}^k\right)\right)-R\left(S\left(\tilde{z}\right)\right)\right\Vert =\left\Vert R\left(S\left({z}^k\right)\right)-\tilde{z}\right\Vert \le \left\Vert {z}^k-\tilde{z}\right\Vert . $$
(36)
This implies that \( \left\Vert {z}^k-\tilde{z}\right\Vert \) is monotonically decreasing (note that \( {z}^k \) is a bounded sequence). Therefore, we deduce that \( {z}^k \) converges to a limit point \( \widehat{z} \), such that
$$ \underset{k\to \infty }{\lim }{z}^k=\widehat{z}. $$
(37)
Similarly, \( {p}^k \) converges to a limit point
$$ \underset{k\to \infty }{\lim }{p}^k=\widehat{p}. $$
(38)
The denoised image \( {z}^{\ast } \) is the unique minimizer of the problem \( {E}_1(z) \). Let \( {p}^{\ast }={z}^{\ast } \); then \( \left({z}^{\ast },{p}^{\ast}\right) \) is a minimizer of the constrained problem \( {E}_2\left(z,p\right) \). Suppose the subsequences \( \left\{{z}^{k_j}\right\}\subseteq {\left\{{z}^k\right\}}_{k=1}^{\infty } \) and \( \left\{{p}^{k_j}\right\}\subseteq {\left\{{p}^k\right\}}_{k=1}^{\infty } \) are convergent and that the corresponding iterates minimize the energy \( {E}_{2\mu}\left(z,p\right) \). Then the following inequality holds:
$$ {E}_{2\mu}\left({z}^{k_j},{p}^{k_j}\right)={E}_2\left({z}^{k_j},{p}^{k_j}\right)+\frac{\mu }{2}{\left\Vert {p}^{k_j}-{z}^{k_j}\right\Vert}_2^2\le {E}_{2\mu}\left({z}^{\ast },{p}^{\ast}\right)={E}_2\left({z}^{\ast },{p}^{\ast}\right). $$
(39)
Letting \( {k}_j\to \infty \), (39) gives
$$ {\left\Vert \widehat{p}-\widehat{z}\right\Vert}_2^2\le \frac{2}{\mu}\left({E}_2\left({z}^{\ast },{p}^{\ast}\right)-{E}_2\left(\widehat{z},\widehat{p}\right)\right). $$
(40)
Since \( \mu >0 \) and the left-hand side of (40) is nonnegative, it follows that
$$ {E}_2\left(\widehat{z},\widehat{p}\right)\le {E}_2\left({z}^{\ast },{p}^{\ast}\right). $$
(41)
Since \( \left({z}^{\ast },{p}^{\ast}\right) \) is the unique minimizer of \( {E}_2\left(z,p\right) \), we also have \( {E}_2\left(\widehat{z},\widehat{p}\right)\ge {E}_2\left({z}^{\ast },{p}^{\ast}\right) \), and therefore
$$ {E}_2\left(\widehat{z},\widehat{p}\right)={E}_2\left({z}^{\ast },{p}^{\ast}\right). $$
(42)
Substituting (42) into (40) then yields
$$ {\left\Vert \widehat{p}-\widehat{z}\right\Vert}_2^2=0. $$
(43)
This implies
$$ \widehat{p}=\widehat{z}. $$
(44)
Combining Eqs. (37), (42), and (44), we conclude that
$$ \underset{k\to \infty }{\lim }{z}^k=\widehat{z}={z}^{\ast }. $$
(45)
From Eq. (45), we conclude that \( {z}^k \) converges to \( {z}^{\ast } \), which is the unique minimizer of \( {E}_1(z) \).