Open Access

A new nonlocal variational bi-regularized image restoration model via split Bregman method

EURASIP Journal on Image and Video Processing 2015, 2015:15

https://doi.org/10.1186/s13640-015-0072-7

Received: 12 November 2014

Accepted: 19 May 2015

Published: 6 June 2015

Abstract

In this paper, we propose a new variational model for image restoration that combines a nonlocal TV regularizer with a nonlocal Laplacian regularizer on the image. Both regularizing terms exploit nonlocal comparisons between pairs of patches in the image. The new model can be seen as a nonlocal version of the CEP-L2 model. We then present an algorithm combining alternating directional minimization with split Bregman iteration to solve the new model. Numerical results verify that the proposed method restores images better than the CEP-L2 model, especially at low noise levels.

Keywords

Image restoration; Nonlocal; Split Bregman; Bi-regularized variational functional

1 Introduction

Variational and PDE-based image restoration methods play an important role in image processing. Their goal is to recover an image u from a noisy version f. This is a typical inverse problem, and the classical way to treat it is regularization: images are reconstructed by minimizing a variational energy functional. Many techniques for image restoration based on energy minimization have been presented [1–5]. Total variation (TV) regularization has become a well-known model in inverse problems because it enables sharp edges and fine details to be recovered.

The TV model proposed by Rudin et al. [1] is formulated as:
$$\min\limits_{u}\int_{\Omega}\left[|\nabla u|+\lambda (u-f)^{2}\right]dx $$
Here, \(f:\Omega \rightarrow R\) represents a noisy gray scale image, u is the recovered image, and λ>0 is a tuning parameter. The first term in the energy is a regularizing term, and the second one is a fidelity term. Existence and uniqueness results for this minimization problem can be found in [1]. The Euler-Lagrange equation for the TV minimization is (formally)
$$u=f+\frac{1}{2\lambda}\text{div}\left(\frac{\nabla u}{|\nabla u|}\right) $$

The model works very well for image denoising, deblurring, and decomposition. However, it cannot completely separate the cartoon part from the textural part and also produces staircase effects. There have existed plenty of variants and numerical attempts to overcome the problem.

In [4], Meyer proposed the space G, which is in some sense the dual of the BV space, to model textural patterns. Meyer’s model is
$$\inf\limits_{u}\left\{\int_{\Omega}|\nabla u|dx+\lambda \|v\|_{G}, f=u+v\right\} $$
where G denotes the Banach space consisting of the functions v that can be written as
$$v=\text{div}(\overrightarrow{g}), \overrightarrow{g}=(g_{1}, g_{2})\in L_{\infty}(\Omega)\times L_{\infty}(\Omega) $$

The norm of G is defined as the infimum of the \(L^{\infty}\) norms of \(|\overrightarrow{g}|\) over all such decompositions. This model is suitable for capturing texture; however, it is not easy to handle in practice because the G norm involves an \(L^{\infty}\) norm. Subsequently, several related models approximating Meyer’s model were introduced in [5–7].

Vese and Osher [5, 6] proposed the following image restoration model as an approximation of Meyer’s model:
$$ \inf\limits_{u, v}\left\{\int_{\Omega}|\nabla u|dx+\lambda \|f-u-v\|_{2}^{2}+\mu\|v\|_{L_{p}}\right\} $$
(1)

In this minimization problem, the term \(\|v\|_{G}\) is approximated by \(\|v\|_{L_{p}}\) as p→∞.

Osher et al. [7] investigated a simplified and modified version corresponding to the case p=2 and λ→∞ in (1). We call it the OSV model:
$$\inf\limits_{u}\left\{\int_{\Omega}|\nabla u|dx+|f-u|^{2}_{H^{-1}}\right\} $$

where the semi-norm \(|v|_{H^{-1}}\) is defined by \(|v|^{2}_{H^{-1}}=\int_{\Omega}|\nabla \triangle^{-1}v|^{2}dx\). The model can be solved efficiently by the steepest descent method.

The TV regularization technique is used in the aforementioned image restoration methods to preserve edges. However, the classical TV norm causes staircase effects in smooth regions. One way of reducing staircasing in TV-regularized image restoration is to combine higher-order derivatives. Chambolle and Lions (CL) incorporated higher-order derivatives into the image restoration model [8], giving the minimization problem:
$$\inf\limits_{u_{1}, u_{2}}\left\{\int_{\Omega}|\nabla u_{1}|dx+\alpha \int_{\Omega}|\partial^{2}u_{2}|dx+\lambda\int_{\Omega}\left(f-u_{1}-u_{2}\right)^{2}dx\right\} $$
Chan et al. proposed a modified version of the CL model for fast staircase reduction in denoising problems [9]. Specifically, the higher-order derivative term was replaced by the energy \(\int_{\Omega}|\triangle u_{2}|^{2}dx\). Their model (CEP-L2) has the following formulation:
$$\inf\limits_{u_{1}, u_{2}}\left\{\int_{\Omega}|\nabla u_{1}|dx+\alpha \int_{\Omega}|\triangle u_{2}|^{2}dx+\frac{1}{2\lambda}\int_{\Omega}\left(f-u_{1}-u_{2}\right)^{2}dx\right\} $$

Another kind of denoising model, the nonlocal means filter, is based on the assumption that natural images contain mutually similar patches and is attracting more and more attention. The nonlocal means filter was first proposed by Buades et al. [10, 11]. Gilboa and Osher defined a nonlocal variational framework by embedding nonlocal means into a variational formulation [12, 13]. After their work, much further research on image processing based on nonlocal variational methods has appeared; see for example [14–19]. Meanwhile, many nonlocal regularizing models were proposed, for example, the nonlocal TV model, the nonlocal \(H^{1}\) model, the nonlocal Meyer model [13], and the nonlocal OSV model [19].

In this paper, we focus on a new model that uses the nonlocal TV and nonlocal Laplacian operators to regularize an image; it can thus effectively exploit the available information in the input image. We formulate a nonlocal variational functional for image restoration that performs adaptive smoothing while preserving edges. The rest of the paper is organized as follows. Section 2 recalls some results on nonlocal operators and the split Bregman method. The proposed variational bi-regularized model for image restoration and its split Bregman algorithm are presented in Section 3. In Section 4, we demonstrate experimental results on natural and textured images which show the validity of the new model, and the conclusion is given in Section 5.

2 Preliminaries

2.1 Nonlocal operators

The nonlocal means filter was introduced by Buades et al. for image denoising; it takes advantage of the self-similarity of images [10, 11]. The idea is to restore an unknown pixel using other similar pixels. The nonlocal means filter is effective at dealing with textures and fine details. Kindermann et al. used variational methods to understand the nonlocal means filter [20]. Gilboa and Osher introduced nonlocal operators to interpret the nonlocal means filter and formalized a systematic and coherent variational framework for them [13, 14].

In this paper, it is natural to interpret the nonlocal means filter from the viewpoint of variational methods. We introduce some definitions and notations regarding nonlocal operators, borrowed from Zhou and Scholkopf [21, 22] and Gilboa and Osher [12]. Let \(\Omega\subset R^{2}\), \(x, y\in\Omega\), and let w(x, y) be a symmetric, non-negative weight function. For a function \(u:\Omega\rightarrow R\), the nonlocal gradient \(\nabla_{\text{NL}}u(x, y):\Omega\times\Omega\rightarrow R\) is defined by
$$\nabla_{\text{NL}}u(x, y)=(u(y)-u(x))\sqrt{w(x, y)}, x, y\in\Omega $$
It is not a vector field in the standard sense but a map from Ω×Ω to R; such a map is called an NL vector. For a pair of NL vectors p and q, the scalar product is defined by
$$\langle p,q \rangle(x)=\int_{\Omega}p(x,y)q(x,y)dy :\Omega \rightarrow R $$
The norm of an NL vector p at x is as the following:
$$ | p|(x)=\sqrt{\langle p,p\rangle(x)}:\Omega\rightarrow R $$
Thus, the norm of the nonlocal gradient of u at x is \(|\nabla_{\text{NL}}u|(x)=\sqrt{\int_{\Omega}(u(y)-u(x))^{2}w(x, y)dy}\). The nonlocal divergence operator \(\text{div}_{\text{NL}}v(x):\Omega\rightarrow R\) is defined by
$$\text{div}_{\text{NL}}v(x)=\int_{\Omega}(v(x, y)-v(y,x))\sqrt{w(x, y)}dy $$
which is an adjoint of the nonlocal gradient in the sense that
$$\int_{\Omega}\langle \nabla_{\text{NL}}u,v\rangle(x)dx=-\int_{\Omega}u(x)\text{div}_{\text{NL}}v(x)dx, \quad u:\Omega\rightarrow R,\ v:\Omega\times\Omega\rightarrow R $$
The nonlocal Laplacian of u can now be defined by
$$\triangle_{\text{NL}}u(x)=\frac{1}{2}\text{div}_{\text{NL}}\nabla_{\text{NL}}u(x)=\int_{\Omega}(u(y)-u(x))w(x, y)dy $$
Here, we outline the discrete nonlocal operators that will be used in the numerical computation. In the discrete setting, the weight w(x, y) is denoted by w i,j, with \(w_{i,j}=\exp(-\|f(N_{i})-f(N_{j})\|^{2}_{2,a}/h^{2})\), where N i and N j are the patch neighbourhoods centred at i and j. The discrete gradient and Laplacian operators are given by \((\nabla_{\text{NLD}}u)_{i,j}=(u_{j}-u_{i})\sqrt{w_{i,j}}\) and \((\triangle_{\text{NLD}}u)_{i}=\sum_{j}(u_{j}-u_{i})w_{i,j}\). The discrete version of the divergence operator is
$$(\text{div}_{\text{NLD}}v)_{i}=\sum_{j}(v_{i,j}-v_{j,i})\sqrt{w_{i,j}} $$
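As an illustration, the discrete operators above can be implemented with dense weight matrices. The sketch below is for a 1D signal with an n×n weight matrix; the function names are ours, and a practical image implementation would restrict j to a search window and store w sparsely:

```python
import numpy as np

def nl_gradient(u, w):
    # (grad_NLD u)_{i,j} = (u_j - u_i) * sqrt(w_{i,j})
    return (u[None, :] - u[:, None]) * np.sqrt(w)

def nl_divergence(v, w):
    # (div_NLD v)_i = sum_j (v_{i,j} - v_{j,i}) * sqrt(w_{i,j})
    return np.sum((v - v.T) * np.sqrt(w), axis=1)

def nl_laplacian(u, w):
    # (lap_NLD u)_i = sum_j (u_j - u_i) * w_{i,j}
    return np.sum((u[None, :] - u[:, None]) * w, axis=1)
```

For a symmetric weight matrix these discrete operators satisfy the adjoint relation ⟨∇_NLD u, v⟩ = −⟨u, div_NLD v⟩ and the identity Δ_NLD = ½ div_NLD ∇_NLD, mirroring the continuous definitions above.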

2.2 Split Bregman iteration

The split Bregman method [23–26] has received a lot of attention recently because of its efficiency in solving \(l_{1}\)-regularized problems. It is a practical algorithm for large-scale problems with fast computational speed. In [26], Goldstein and Osher introduced the split Bregman iteration to solve the general optimization problem of the form
$$ \min\limits_{u}\{|\phi(u)|+J(u)\} $$
(2)
where |·| denotes the \(l_{1}\) norm. Both |ϕ(u)| and J(u) are convex functions, and we assume ϕ(·) is differentiable. Problem (2) can be rewritten as the equivalent constrained minimization problem
$$\min\limits_{u,d}\{|d|+J(u)\} $$
such that d=ϕ(u). Then, we relax the constraint and convert it into an unconstrained problem:
$$ \min\limits_{u,d}\{|d|+J(u)+\frac{\lambda}{2}\|d-\phi(u)\|^{2}_{2}\} $$
(3)
where λ>0 is a constant. The solution of (3) via split Bregman iteration is
$$\left(u^{k+1}, d^{k+1}\right)=\text{arg}\min\limits_{u,d}\left\{|d|+J(u)+\frac{\lambda}{2}\|d-\phi(u)-b^{k}\|^{2}_{2}\right\} $$
$${\kern20pt} b^{k+1}=b^{k}+\left(\phi\left(u^{k+1}\right)-d^{k+1}\right) $$
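For intuition, here is a minimal sketch of this iteration on the scalar toy problem min_u |u|_1 + (λ/2)‖u−f‖², i.e., ϕ(u)=u, whose exact minimizer is soft-thresholding of f at 1/λ. The function names and parameter values are ours, chosen only for illustration:

```python
import numpy as np

def shrink(x, gamma):
    # soft-thresholding: (x/|x|) * max(|x| - gamma, 0)
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def split_bregman_l1(f, lam=2.0, mu=5.0, iters=300):
    """Split Bregman for min_u |u|_1 + (lam/2)*||u - f||^2 with phi(u) = u."""
    u = f.copy()
    d = np.zeros_like(f)
    b = np.zeros_like(f)
    for _ in range(iters):
        u = (lam * f + mu * (d - b)) / (lam + mu)  # quadratic u-subproblem
        d = shrink(u + b, 1.0 / mu)                # d-subproblem, solved exactly
        b = b + u - d                              # Bregman update
    return u
```

Each step is cheap: the u-subproblem is quadratic, the d-subproblem is solved exactly by the shrinkage operator, and the Bregman variable b accumulates the error in the constraint d=ϕ(u).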

3 Image restoration model

In this section, we shall give a description of the new nonlocal image restoration model and present the corresponding algorithm via alternating directional minimization and split Bregman method.

3.1 New bi-regularized image restoration model

Inspired by [17, 19], we apply nonlocal operators to the CEP-L2 model of [9], obtaining a nonlocal version of the image restoration model which we call NLCEP-L2. The new model is characterized by the bi-regularized variational functional:
$${} \begin{aligned} \min\limits_{u,v}E(u,v)=\int_{\Omega}|\nabla_{\text{NL}}u|dx&+\frac{\alpha}{2}\int_{\Omega}|\triangle_{\text{NL}}v|^{2}dx\\&+\frac{1}{2\lambda}\int_{\Omega}\left(\,f-u-v\right)^{2}dx \end{aligned} $$
(4)
where α and λ are regularization parameters. This model can also be interpreted as a decomposition model f=u+v+w, where u, v, and w are respectively the discontinuous, piecewise smooth, and noise components. To solve the variational problem (4), we employ an alternating minimization technique which minimizes over one variable while fixing the other. We thus consider the following two coupled minimization subproblems:
  1. v being fixed, we search for u as a solution of
     $$ \min\limits_{u}\int_{\Omega}\left[|\nabla_{\text{NL}}u|+\frac{1}{2\lambda}\left(f-u-v\right)^{2}\right]dx $$
     (5)
  2. u being fixed, find v which satisfies:
     $$ \min\limits_{v}\int_{\Omega}\left[\frac{\alpha}{2}|\triangle_{\text{NL}}v|^{2}+\frac{1}{2\lambda}\left(f-u-v\right)^{2}\right]dx $$
     (6)

3.2 The v-subproblem

For u fixed, the minimizer v of the v-subproblem (6) is obtained by solving the corresponding Euler-Lagrange equation:
$$\left(\frac{1}{\lambda}I+\alpha\triangle^{2}_{\text{NL}}\right)v=\frac{1}{\lambda}(f-u) $$
where I is the identity operator. This linear elliptic equation can be solved efficiently by Gauss-Seidel iteration.

3.3 The u-subproblem

For v fixed, the u-subproblem (5) is in essence the minimization of a nonlocal total variation (NLTV) regularization energy. In 2005, Kindermann et al. established the nonlocal bounded variation (NLBV) space and proved the existence of a minimizer for the denoising functional with NLBV regularization [20]. In [27], Bresson gave a split Bregman method for NLTV energy minimization. Following their work, split Bregman iteration can be used directly here since our nonlocal functional is convex. We first replace \(\nabla_{\text{NL}}u\) by d. This yields the constrained problem
$$\min\limits_{u,d}\int_{\Omega}\left[|d|+\frac{1}{2\lambda}\left(f-u-v\right)^{2}\right]dx $$
such that \(d=\nabla_{\text{NL}}u\). We solve this problem by transforming it into an unconstrained one:
$$\min\limits_{u, d}\left\{\int_{\Omega}|d|dx+\frac{1}{2\lambda}\int_{\Omega}(f-u-v)^{2}dx+\frac{\mu}{2}\int_{\Omega}(d-\nabla_{\text{NL}}u)^{2}dx\right\} $$
(7)
Then the split Bregman iteration for (7) is described as
$$ u^{k+1}=\text{arg} \min\limits_{u}\left\{\frac{1}{2\lambda}\|f-u-v\|^{2}_{2}+\frac{\mu}{2}\|d^{k}-\nabla_{\text{NL}}u-b^{k}\|^{2}_{2}\right\} $$
(8)
$$ d^{k+1}=\text{arg} \min\limits_{d}\left\{\int_{\Omega}|d|dx+\frac{\mu}{2}\|d-\nabla_{\text{NL}}u^{k+1}-b^{k}\|^{2}_{2}\right\} $$
(9)
$$b^{k+1}=b^{k}+\left(\nabla_{\text{NL}}u^{k+1}-d^{k+1}\right) $$
To find the optimal value of u, we solve subproblem (8). Using the adjoint relation between \(\nabla_{\text{NL}}\) and \(\text{div}_{\text{NL}}\) together with \(\triangle_{\text{NL}}=\frac{1}{2}\text{div}_{\text{NL}}\nabla_{\text{NL}}\), the Euler-Lagrange equation of (8) is
$$\left(\frac{1}{\lambda}I-2\mu\triangle_{\text{NL}}\right)u^{k+1}=\frac{1}{\lambda}\left(f-v\right)-\mu \text{div}_{\text{NL}}\left(d^{k}-b^{k}\right) $$
Because the system is diagonally dominant, the solution of (8) can be obtained by the Gauss-Seidel method:
$$u^{k+1}=K^{-1}\text{rhs}^{k} $$
where
$$K=\frac{1}{\lambda}I-2\mu\triangle_{\text{NL}} $$
$$\text{rhs}^{k}=\frac{1}{\lambda}\left(f-v\right)-\mu \text{div}_{\text{NL}}\left(d^{k}-b^{k}\right) $$
The optimal value of d in (9) is computed using the shrinkage operator [26]:
$$d^{k+1}=\text{shrink}\left(\nabla_{\text{NL}}u^{k+1}+b^{k}, 1/\mu\right) $$
where
$$\text{shrink}(x,\gamma)=\frac{x}{|x|}\cdot \text{max}(|x|-\gamma, 0). $$
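Since d here is a discrete NL vector, |x| in the shrink formula is the pointwise norm \(|x|(i)=\sqrt{\sum_{j}x_{i,j}^{2}}\), so the shrinkage rescales each row of the array d toward zero. A sketch (function name ours):

```python
import numpy as np

def nl_shrink(x, gamma):
    """Shrinkage of a discrete NL vector x (x[i, j] ~ x(i, j)):
    row x[i, :] is scaled by max(|x|(i) - gamma, 0) / |x|(i),
    where |x|(i) = sqrt(sum_j x[i, j]**2); rows with |x|(i) <= gamma vanish."""
    norm = np.sqrt(np.sum(x ** 2, axis=1, keepdims=True))
    scale = np.maximum(norm - gamma, 0.0) / np.maximum(norm, 1e-12)
    return x * scale
```

The small constant 1e-12 only guards the division at rows with zero norm, where the shrink is zero anyway.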

To obtain the optimal values of u and d, subproblems (8) and (9) should in principle be solved to full convergence; however, this turns out to be unnecessary in practice. For many applications, we have found that optimal efficiency is obtained when only one iteration of the inner loop is performed.

3.4 Algorithm of the new model

The alternating minimization method and the split Bregman method are combined to obtain the algorithm for our new model. We summarize the algorithm for the bi-regularized model (4) as Algorithm 1.

To implement Algorithm 1 for the new image restoration model (4), we also give its discrete version, using the discrete nonlocal operators of Section 2.1.
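Since the discrete algorithm combines the subproblem solvers described above, a compact dense-matrix sketch of the whole scheme on a 1D signal may be helpful. This is our own illustrative code, not the authors' implementation: direct solves stand in for the Gauss-Seidel sweeps, the weight matrix w is assumed symmetric, and the u-update is formed from the normal equations of (8) with the discrete adjoint \(\nabla_{\text{NLD}}^{T}=-\text{div}_{\text{NLD}}\):

```python
import numpy as np

def nlcep_l2_denoise(f, w, lam=2.0, alpha=1.0, mu=6.0, outer=10):
    """Alternating minimization + one split Bregman sweep per outer iteration
    for the NLCEP-L2 model, with dense nonlocal operators built from w."""
    n = len(f)
    sw = np.sqrt(w)
    grad = lambda x: (x[None, :] - x[:, None]) * sw          # NL gradient
    div = lambda p: np.sum((p - p.T) * sw, axis=1)           # NL divergence
    L = w - np.diag(w.sum(axis=1))                           # NL Laplacian matrix
    u, v = f.copy(), np.zeros(n)
    d = np.zeros((n, n))
    b = np.zeros((n, n))
    A_v = np.eye(n) / lam + alpha * (L @ L)                  # v-subproblem matrix
    A_u = np.eye(n) / lam - 2.0 * mu * L                     # u-subproblem matrix
    for _ in range(outer):
        v = np.linalg.solve(A_v, (f - u) / lam)              # v-subproblem (6)
        rhs = (f - v) / lam - mu * div(d - b)                # u-subproblem (8)
        u = np.linalg.solve(A_u, rhs)
        g = grad(u) + b                                      # d-subproblem (9)
        norm = np.sqrt(np.sum(g ** 2, axis=1, keepdims=True))
        d = g * np.maximum(norm - 1.0 / mu, 0.0) / np.maximum(norm, 1e-12)
        b = b + grad(u) - d                                  # Bregman update
    return u, v
```

A simple sanity check: for a constant input f, the nonlocal gradient and Laplacian of f vanish, so the scheme returns u=f and v=0 exactly at every iteration.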

4 Numerical experiments

In this section, we demonstrate several numerical results for image denoising and compare with the CEP-L2 model to show the effectiveness of our new model. In our experiments, we choose a square search window centred at i of size 11×11 pixels and a similarity neighbourhood N j of 5×5 pixels. The iteration termination condition is \(\frac{\|u^{k}-u^{k-1}\|_{2}}{\|u^{k}\|_{2}}<\text{tol}\), where tol is a small positive number defined by the user; in our algorithm, we set tol=2.5×10−3. The parameter α controls the smoothness of the component v, the geometric part of the image: the larger the value of α, the stronger the staircase effect. In our experiments, we found that 0.5≤α≤50 is appropriate for gray scale images with intensities from 0 to 255. The amount of noise removed from a given image is controlled by the parameters λ and μ: the larger they are, the more geometric features will be averaged.

Let f=u 0+n be the noisy version of the original true image u 0 of size M×N, where n stands for additive white Gaussian noise with mean zero and standard deviation σ. To quantify how good a denoised image u is, we use the peak signal-to-noise ratio (PSNR), defined by
$$\text{PSNR}=10\log_{10}\left(\frac{255^{2}}{\text{MSE}}\right). $$
MSE is the mean squared error defined by
$$\text{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}(u(i,j)-u_{0}(i,j))^{2}. $$
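The two formulas above translate directly into code; a minimal sketch (function name ours):

```python
import numpy as np

def psnr(u, u0, peak=255.0):
    """Peak signal-to-noise ratio between a restored image u and the truth u0."""
    mse = np.mean((np.asarray(u, dtype=float) - np.asarray(u0, dtype=float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR means the restored image is closer to the clean one; the scale is logarithmic, so a gain of about 1 dB corresponds to roughly a 20% reduction in MSE.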
In Figs. 1 and 2, we show the denoising results of our new model and the CEP-L2 model on the Lena image with additive white Gaussian noise of σ=10 and σ=15, respectively. We choose the parameters λ=2, α=1, μ=6 in Fig. 1 and λ=2, α=2, μ=10 in Fig. 2. From the results, we can see that our method performs better than the CEP-L2 model. To illustrate the advantages of our method, we show details of the denoised Lena image in Fig. 1. Comparing these details, we conclude that a certain amount of staircase effect still remains in the Lena image denoised by CEP-L2, while the details in the image denoised by our method (NLCEP-L2) are well preserved with less staircase effect. In Fig. 3, we apply our new denoising model to the Cameraman image with Gaussian noise of standard deviation σ=10 and compare it with the CEP-L2 model, choosing λ=2, α=2, μ=3. In Fig. 4, we present our results for the Barbara image with its highly textured patterns; the parameters are λ=0.2, α=0.5, μ=3.
Fig. 1

Denoising of the Lena image using different methods. Top left: the original image. Top right: a noisy image with Gaussian noise (σ=10). Middle left: the denoised image by CEP-L2 with PSNR=32.3446. Middle right: by NLCEP-L2 with PSNR=33.0145. Bottom left: detail of the Lena image denoised by CEP-L2 (magnified 1.5 times). Bottom right: detail of the Lena image denoised by NLCEP-L2 (magnified 1.5 times)

Fig. 2

Denoising of the Lena image using different methods. Top left: a noisy image with Gaussian noise (σ=15). Top right: the denoised image by CEP-L2 with PSNR=29.7321. Bottom: by NLCEP-L2 with PSNR=30.8995

Fig. 3

Denoising of the Cameraman image using different methods. Top left: the original image. Top right: a noisy image with Gaussian noise (σ=10). Bottom left: the denoised image by CEP-L2 with PSNR=31.4397. Bottom right: by NLCEP-L2 with PSNR=32.3308

Fig. 4

Denoising of the Barbara image using different methods. Top left: the original image. Top right: a noisy image with Gaussian noise (σ=10). Bottom left: the denoised image by CEP-L2 with PSNR=29.0179. Bottom right: by NLCEP-L2 with PSNR=29.2275

From Figs. 1, 2, 3 and 4, we can see that the images denoised by the new NLCEP-L2 method are smoother. The new method produces sharper edges and suppresses jaggy artifacts better than the CEP-L2 method, providing slightly better quality of the denoised images.

To display the denoising capability of the new model, we illustrate the noise image f-u-v in Figs. 5, 6, 7 and 8.
Fig. 5

The noise image f-u-v of the Lena image with Gaussian noise (σ=10). Left: the noise image obtained by CEP-L2. Right: by NLCEP-L2

Fig. 6

The noise image f-u-v of the Lena image with Gaussian noise (σ=15). Left: the noise image obtained by CEP-L2. Right: by NLCEP-L2

Fig. 7

The noise image f-u-v of the Cameraman image with Gaussian noise (σ=10). Left: the noise image obtained by CEP-L2. Right: by NLCEP-L2

Fig. 8

The noise image f-u-v of the Barbara image with Gaussian noise (σ=10). Left: the noise image obtained by CEP-L2. Right: by NLCEP-L2

We also show the efficiency of the new model by comparing it with the classical TV method. We ran experiments on the Lena, Cameraman, and Barbara images with different noise standard deviations σ (σ=5, 10, 15, 20). Table 1 displays the signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR) of the experiments. We can see that our new model obtains quite good results and performs better than the CEP-L2 model and the TV method, especially for small σ (σ=5, 10). It is effective for images with low noise levels, not only natural images with many sharp edges but also images with much texture.
Table 1

Comparison of SNR and PSNR between TV, CEP- L 2 and NLCEP- L 2

Image      | Noise σ | TV       |          | CEP-L2   |          | NLCEP-L2 |
           |         | SNR      | PSNR     | SNR      | PSNR     | SNR      | PSNR
Cameraman  | 10      | 18.5077  | 30.7430  | 19.2044  | 31.4397  | 20.0955  | 32.3308
Lena       | 10      | 17.3721  | 31.9383  | 17.7784  | 32.3446  | 18.4483  | 33.0145
Lena       | 15      | 15.0388  | 29.6050  | 15.2907  | 29.8569  | 16.3500  | 30.9162
Cameraman  | 20      | 14.5561  | 26.7914  | 15.1091  | 27.3444  | 16.4166  | 28.6519
Barbara    | 15      | 11.6077  | 25.7100  | 12.2180  | 26.3203  | 12.4654  | 26.5676
Barbara    | 5       | 18.8485  | 32.9508  | 19.3232  | 33.4252  | 20.1640  | 34.2663

5 Conclusions

In this paper, we have presented a new nonlocal variational bi-regularized model for image restoration, a nonlocal version of the CEP-L2 model. We apply the alternating minimization technique and the split Bregman algorithm to solve the bi-regularized variational minimization problem. Applying the new model to image denoising problems, we show that it is an effective technique that produces satisfactory denoised images. The experimental results verify that it obtains better results than some previous methods.

Declarations

Acknowledgements

This work is supported in part by two grants from the National Natural Science Foundation of China (Nos. 61201431, 61170253), the SDUST Research Fund (No. 2012KYTD105), and the Qingdao Postdoctoral Research Project.

Authors’ Affiliations

(1)
College of Mathematics and Systems Science, Shandong University of Science and Technology
(2)
College of Information and Science Engineering, Shandong University of Science and Technology

References

  1. LI Rudin, S Osher, E Fatemi, Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena 60(1-4), 259–268 (1992)
  2. M Zhu, SJ Wright, TF Chan, Duality-based algorithms for total-variation-regularized image restoration. Comput. Optim. Appl. 47, 377–400 (2010)
  3. Y Chen, T Wunderli, Adaptive total variation for image restoration in BV space. J. Math. Anal. Appl. 272(1), 117–137 (2002)
  4. Y Meyer, Oscillating patterns in image processing and nonlinear evolution equations. University Lecture Series, No. 22 (American Mathematical Society, 2001)
  5. L Vese, S Osher, Modeling textures with total variation minimization and oscillating patterns in image processing. J. Sci. Comput. 19(1-3), 553–572 (2003)
  6. L Vese, S Osher, Image denoising and decomposition with total variation minimization and oscillatory functions. J. Math. Imaging Vis. 20(1-2), 7–18 (2004)
  7. S Osher, A Sole, L Vese, Image decomposition and restoration using total variation minimization and the H −1 norm. Multiscale Model. Simul. 1(3), 349–370 (2003)
  8. A Chambolle, P Lions, Image recovery via total variation minimization and related problems. Numer. Math. 76(2), 319–335 (1997)
  9. TF Chan, S Esedoglu, FE Park, Image decomposition combining staircase reduction and texture extraction. J. Vis. Commun. Image Represent. 18, 464–486 (2007)
  10. A Buades, B Coll, JM Morel, A non-local algorithm for image denoising, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, San Diego, CA, USA, 20-26 June 2005)
  11. A Buades, B Coll, JM Morel, A review of image denoising algorithms, with a new one. SIAM Multiscale Model. Simul. 4(2), 490–530 (2005)
  12. G Gilboa, S Osher, Nonlocal linear image regularization and supervised segmentation. SIAM Multiscale Model. Simul. 6(2), 595–630 (2007)
  13. G Gilboa, S Osher, Nonlocal operators with applications to image processing. SIAM Multiscale Model. Simul. 7(3), 1005–1028 (2008)
  14. G Gilboa, J Darbon, S Osher, T Chan, Nonlocal convex functionals for image regularization. Technical report 06-57, UCLA CAM Report (2006)
  15. G Peyre, S Bougleux, LD Cohen, Non-local regularization of inverse problems. Inverse Probl. Imaging 5(2), 511–530 (2011)
  16. Y Lou, X Zhang, S Osher, A Bertozzi, Image recovery via nonlocal operators. J. Sci. Comput. 42(2), 185–197 (2010)
  17. X Zhang, M Burger, X Bresson, S Osher, Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imaging Sci. 3(3), 253–276 (2010)
  18. X Zhang, T Chan, Wavelet inpainting by nonlocal total variation. Inverse Probl. Imaging 4, 191–210 (2010)
  19. Y Jin, J Jost, G Wang, A nonlocal version of the Osher-Sole-Vese model. J. Math. Imaging Vis. 44, 99–113 (2012)
  20. S Kindermann, S Osher, PW Jones, Deblurring and denoising of images by nonlocal functionals. Multiscale Model. Simul. 4(4), 1091–1115 (2005)
  21. D Zhou, B Scholkopf, A regularization framework for learning from graph data, in ICML Workshop on Statistical Relational Learning and Its Connections to Other Fields (Banff, Canada, 2004)
  22. D Zhou, B Scholkopf, Regularization on discrete spaces, in Pattern Recognition, Proceedings of the 27th DAGM Symposium (Springer-Verlag, Vienna, Austria, 2005), pp. 361–368
  23. W Yin, S Osher, D Goldfarb, J Darbon, Bregman iterative algorithms for l 1-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1, 143–168 (2008)
  24. X Liu, L Huang, Split Bregman iteration algorithm for total bounded variation regularization based image deblurring. J. Math. Anal. Appl. 372(2), 486–495 (2010)
  25. J-F Cai, S Osher, Z Shen, Split Bregman methods and frame based image restoration. Multiscale Model. Simul. 8(2), 337–369 (2010)
  26. T Goldstein, S Osher, The split Bregman method for l 1-regularized problems. SIAM J. Imaging Sci. 2, 323–343 (2009)
  27. X Bresson, A short note for nonlocal TV minimization. Technical report (2009)

Copyright

© Jiang et al. 2015

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.