# A new nonlocal variational bi-regularized image restoration model via split Bregman method

Dong-Huan Jiang^{1} (email author), Xue Tan^{1}, Yong-Quan Liang^{2}, and Sheng Fang^{2}

EURASIP Journal on Image and Video Processing **2015**:15

https://doi.org/10.1186/s13640-015-0072-7

© Jiang et al. 2015

**Received: **12 November 2014

**Accepted: **19 May 2015

**Published: **6 June 2015

## Abstract

In this paper, we propose a new variational model for image restoration that combines a nonlocal TV regularizer with a nonlocal Laplacian regularizer. Both regularizing terms make use of nonlocal comparisons between pairs of patches in the image. The new model can be seen as a nonlocal version of the CEP-*L*_{2} model. An algorithm combining alternating directional minimization and split Bregman iteration is presented to solve the new model. Numerical results verify that the proposed method restores images better than the CEP-*L*_{2} model, especially at low noise levels.

### Keywords

Image restoration; Nonlocal; Split Bregman; Bi-regularized variational functional

## 1 Introduction

Variational and PDE-based image restoration methods play an important role in image processing. Their goal is to recover an image *u* from its noisy version *f*. This is a typical inverse problem, and the classical way to tackle an inverse problem is to use regularization techniques; that is, images are reconstructed by minimizing a variational energy functional. Many techniques for image restoration based on energy minimization have been presented [1–5]. The total variation (TV) regularization has become a well-known model in inverse problems because it enables sharp edges and fine details to be recovered.

The TV model of Rudin, Osher, and Fatemi [1] reads

$$ \min\limits_{u}\int_{\Omega}|\nabla u|dx+\frac{\lambda}{2}\int_{\Omega}\left(\,f-u\right)^{2}dx, $$

where *f*: *Ω*→*R* represents a noisy gray-scale image, *u* is the recovered image, and *λ*>0 is a tuning parameter. The first term in the energy is a regularizing term, and the second one is a fidelity term. Existence and uniqueness results for this minimization problem can be found in [1]. The Euler-Lagrange equation for the TV minimization is (formally)

$$ -\text{div}\left(\frac{\nabla u}{|\nabla u|}\right)+\lambda(u-f)=0. $$

The model works very well for image denoising, deblurring, and decomposition. However, it cannot completely separate the cartoon part from the textural part, and it also produces staircase effects. Plenty of variants and numerical schemes have been proposed to overcome these problems.
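For concreteness, the TV model above can be sketched as gradient descent on a smoothed version of the energy. This is a minimal NumPy illustration; the ε-smoothing of |∇u|, the periodic boundary handling, and all parameter values are our own illustrative choices, not taken from the paper.

```python
import numpy as np

def tv_denoise(f, lam=2.0, tau=0.01, eps=0.05, n_iter=500):
    """Gradient descent on the smoothed ROF energy
    E(u) = sum sqrt(|grad u|^2 + eps^2) + (lam/2) * sum (f - u)^2.
    tau is kept small so the explicit scheme stays stable (tau < eps/4)."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # forward differences with periodic boundary (np.roll) for brevity
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        # backward-difference divergence, the adjoint of the forward gradient
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # descent step: -dE/du = div(p) + lam*(f - u)
        u += tau * (div + lam * (f - u))
    return u
```

Straight edges are stationary under this flow while small-scale noise is diffused away, which is exactly the edge-preserving behavior described above.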

To better capture texture, Meyer [4] proposed decomposing *f*=*u*+*v* with the oscillatory component *v* lying in the space G of distributions \(v=\text{div}\,\overrightarrow{g}\) with \(\overrightarrow{g}\in L_{\infty}(\Omega, R^{2})\). The norm of G is defined as the lower bound of all *L*_{∞} norms of \(|\overrightarrow {g}|\). This model is well suited to capturing texture; however, it is not easy to handle in practice because the G norm involves a term coming from an *L*_{∞} norm. Subsequently, several related problems approximating Meyer's model were introduced in [5–7].

Vese and Osher [5] proposed the approximation

$$ \inf\limits_{u, \overrightarrow{g}}\int_{\Omega}|\nabla u|dx+\lambda\int_{\Omega}\left|\,f-u-\text{div}\,\overrightarrow{g}\right|^{2}dx+\mu\left[\int_{\Omega}\left(\sqrt{g_{1}^{2}+g_{2}^{2}}\right)^{p}dx\right]^{1/p}. $$(1)

In this minimization problem, the term ∥*v*∥_{G} is approximated by \(\|v\|_{L_{p}}\) as *p*→*∞*.

Osher, Sole, and Vese [7] chose *p*=2 and let *λ*→*∞* in (1). We call it the OSV model:

$$ \min\limits_{u}\int_{\Omega}|\nabla u|dx+\mu\,|\,f-u|^{2}_{H^{-1}}, $$

where the semi-norm \(|v|_{H^{-1}}\phantom {\dot {i}\!}\) is defined by \(|v|^{2}_{H^{-1}}=\int _{\Omega }|\nabla \triangle ^{-1}v|^{2}\). The model can be solved efficiently by the steepest descent method.

The model of Chan, Esedoglu, and Park [9], which combines staircase reduction and texture extraction (in its *L*_{2} version, denoted CEP-*L*_{2}), has the following formulation:

$$ \min\limits_{u, v}\int_{\Omega}\left[|\nabla u|+\frac{\alpha}{2}|\triangle v|^{2}+\frac{1}{2\lambda}\left(\,f-u-v\right)^{2}\right]dx. $$

Another kind of denoising model, the nonlocal means filter, is based on the assumption that natural images contain mutually similar patches and is attracting more and more attention. The nonlocal means filter was first proposed by Buades et al. [10, 11]. Gilboa and Osher defined a nonlocal variational framework by embedding nonlocal means into a variational formulation [12, 13]. After the work of Gilboa and Osher, much further research on image processing based on nonlocal variational methods has been carried out; see, for example, [14–19]. Meanwhile, many nonlocal regularizing models were proposed, for example, the nonlocal TV model, the nonlocal *H*^{1} model, the nonlocal Meyer's model [13], and the nonlocal OSV model [19].

In this paper, we focus on a new model which uses the nonlocal TV and the nonlocal Laplace operator to regularize an image; thus, it can effectively exploit the available information in the input image. We formulate a nonlocal variational functional for image restoration which performs adaptive smoothing while also preserving edges. The rest of the paper is organized as follows. Section 2 recalls some results on nonlocal operators and the split Bregman method. The proposed variational bi-regularized model for image restoration and its split Bregman algorithm are presented in Section 3. In Section 4, we demonstrate experimental results on natural and textured images which show the validity of the new model, and the conclusion is given in Section 5.

## 2 Preliminaries

### 2.1 Nonlocal operators

The nonlocal means filter was introduced by Buades et al. for image denoising; it takes advantage of the self-similarity of images [10, 11]. The idea is to restore an unknown pixel using other similar pixels, and the filter is effective in dealing with textures and fine details. Kindermann et al. used variational methods to understand the nonlocal means filter [20]. Gilboa and Osher introduced nonlocal operators to interpret the nonlocal means filter and formalized a systematic and coherent variational framework for nonlocal operators [13, 14].

Let *Ω*⊂*R*^{2}, *x*, *y*∈*Ω*, and let *w*(*x*, *y*) be a weight function which is symmetric and non-negative. For a function *u*: *Ω*→*R*, the nonlocal gradient ∇_{NL}*u*(*x*, *y*): *Ω*×*Ω*→*R* is defined by

$$ \nabla_{\text{NL}}u(x, y)=\left(u(y)-u(x)\right)\sqrt{w(x, y)}. $$

The nonlocal gradient is thus a map from *Ω*×*Ω* to *R*; any map from *Ω*×*Ω* to *R* is called an *NL vector*. For a pair of *NL vectors* *p* and *q*, the scalar product is defined by

$$ \langle p, q\rangle(x)=\int_{\Omega}p(x, y)q(x, y)dy, $$

and the magnitude of an *NL vector* *p* at *x* is as the following:

$$ |p|(x)=\sqrt{\int_{\Omega}p(x, y)^{2}dy}. $$

In particular, the magnitude of the nonlocal gradient of *u* at *x* is defined by \(|\nabla _{\text {NL}}u|(x)=\sqrt {\int _{\Omega }(u(y)-u(x))^{2}w(x, y)dy}\). The nonlocal divergence operator div_{NL}*v*(*x*): *Ω*→*R* is defined by

$$ \text{div}_{\text{NL}}v(x)=\int_{\Omega}\left(v(x, y)-v(y, x)\right)\sqrt{w(x, y)}\,dy. $$

The nonlocal Laplacian of *u* can now be defined by

$$ \triangle_{\text{NL}}u(x)=\int_{\Omega}\left(u(y)-u(x)\right)w(x, y)\,dy. $$

In the discrete setting, the weight function *w*(*x*, *y*) is denoted by *w*_{i,j}, with \(w_{i,j}=\text {exp}(-[\parallel f(N_{i})-f(N_{j})\parallel ^{2}_{2,a}]/h^{2})\), where *N*_{i} and *N*_{j} are the neighbourhoods (patches) centered at pixels *i* and *j*. The discrete gradient and Laplace operators are given by \((\nabla _{\text {NLD}}u)_{i,j}=(u_{j}-u_{i})\sqrt {w_{i,j}}\) and \((\triangle _{\text {NLD}}u)_{i}=\sum _{j}(u_{j}-u_{i})w_{i,j}\). The discrete version of the divergence operator is represented as

$$ (\text{div}_{\text{NLD}}\,p)_{i}=\sum_{j}\left(p_{i,j}-p_{j,i}\right)\sqrt{w_{i,j}}. $$
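These discrete operators translate almost literally into code. The following NumPy sketch computes dense patch-similarity weights and the discrete nonlocal gradient and Laplacian; it is illustrative only, using plain unweighted patch distances instead of the Gaussian-weighted ‖·‖_{2,a} norm, and dense O(n²) weights that are only feasible for tiny images. All function names are ours.

```python
import numpy as np

def nl_weights(f, patch=1, h=0.5):
    """Dense patch-similarity weights w_ij = exp(-||f(N_i) - f(N_j)||^2 / h^2)
    over all pixel pairs of a small 2-D image f (illustrative, O(n^2) pairs)."""
    H, W = f.shape
    pad = np.pad(f, patch, mode='reflect')
    # one row per pixel, holding its (2*patch+1)^2 patch
    patches = np.array([
        pad[i:i + 2 * patch + 1, j:j + 2 * patch + 1].ravel()
        for i in range(H) for j in range(W)
    ])
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / h**2)  # symmetric, non-negative

def nl_gradient(u, w):
    """(grad_NLD u)_{ij} = (u_j - u_i) * sqrt(w_ij), pixels flattened row-wise."""
    uf = u.ravel()
    return (uf[None, :] - uf[:, None]) * np.sqrt(w)

def nl_laplacian(u, w):
    """(lap_NLD u)_i = sum_j (u_j - u_i) * w_ij."""
    uf = u.ravel()
    return ((uf[None, :] - uf[:, None]) * w).sum(axis=1).reshape(u.shape)
```

Note that symmetry of *w* makes the discrete gradient antisymmetric in (*i*, *j*), and the Laplacian of a constant image vanishes, mirroring the continuous definitions above.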

### 2.2 Split Bregman iteration

The split Bregman method is an efficient solver for *l*_{1}-regularized problems. It is a practical algorithm for large-scale problems with fast computational speed. In [26], Goldstein and Osher introduced the split Bregman iteration to solve the general optimization problem of the form

$$ \min\limits_{u}|\phi(u)|+J(u), $$(2)

where |·| denotes the *l*_{1} norm. Both |*ϕ*(*u*)| and *J*(*u*) are convex functions, and we shall assume *ϕ*(·) to be differentiable. The problem (2) can be rewritten as the following equivalent constrained minimization problem

$$ \min\limits_{u, d}|d|+J(u)\quad\text{such that}\quad d=\phi(u). $$

Then, we relax the constraint and convert it into an unconstrained problem:

$$ \min\limits_{u, d}|d|+J(u)+\frac{\lambda}{2}\left\|d-\phi(u)\right\|^{2}_{2}, $$(3)

where *λ*>0 is a constant. The solution of (3) via split Bregman iteration is

$$ \left(u^{k+1}, d^{k+1}\right)=\arg\min\limits_{u, d}|d|+J(u)+\frac{\lambda}{2}\left\|d-\phi(u)-b^{k}\right\|^{2}_{2}, $$

$$ b^{k+1}=b^{k}+\phi\left(u^{k+1}\right)-d^{k+1}. $$
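The iteration can be made concrete on the simplest instance of (2), taking *ϕ*(*u*)=*u* and *J*(*u*)=(*μ*/2)‖*u*−*f*‖²; this toy choice and the function names are ours, not from the paper. The exact solution is the soft-threshold of *f*, which the iteration recovers.

```python
import numpy as np

def shrink(x, gamma):
    """Soft-thresholding: the closed-form minimizer of |d| + (1/(2*gamma))*(d - x)^2."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def split_bregman_l1(f, mu=2.0, lam=1.0, n_iter=100):
    """Split Bregman sketch for min_u |u|_1 + (mu/2)*||u - f||_2^2,
    i.e. phi(u) = u and J(u) = (mu/2)*||u - f||^2 in the general form (2)."""
    u = np.zeros_like(f)
    d = np.zeros_like(f)
    b = np.zeros_like(f)
    for _ in range(n_iter):
        # u-step: quadratic in u, solved exactly
        u = (mu * f + lam * (d - b)) / (mu + lam)
        # d-step: shrinkage
        d = shrink(u + b, 1.0 / lam)
        # Bregman update
        b = b + u - d
    return u
```

The split moves the non-differentiable *l*_{1} term onto the auxiliary variable *d*, so each substep is either a quadratic solve or a componentwise shrinkage.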

## 3 Image restoration model

In this section, we shall give a description of the new nonlocal image restoration model and present the corresponding algorithm via alternating directional minimization and split Bregman method.

### 3.1 New bi-regularized image restoration model

We extend the CEP-*L*_{2} model in [9] to the nonlocal setting and obtain a nonlocal version of the image restoration model, which we call NLCEP-*L*_{2}. The new model for image restoration is characterized by means of the bi-regularized variational functional:

$$ \min\limits_{u, v}\int_{\Omega}\left[|\nabla_{\text{NL}}u|+\frac{\alpha}{2}|\triangle_{\text{NL}}v|^{2}+\frac{1}{2\lambda}\left(\,f-u-v\right)^{2}\right]dx, $$(4)

where *α* and *λ* are regularization parameters. This model can also be interpreted as a decomposition model *f*=*u*+*v*+*w*, where *u*, *v*, and *w* are respectively the discontinuous, piecewise smooth, and noise components. To solve the variational problem (4), we first employ an alternating minimization technique which minimizes over one variable while fixing the other. So, we should consider the following two coupled minimization subproblems:

- (1) *v* being fixed, we search for *u* as a solution of
  $$ \min\limits_{u}\int_{\Omega}\left[|\nabla_{\text{NL}}u|+\frac{1}{2\lambda}\left(\,f-u-v\right)^{2}\right]dx $$(5)
- (2) *u* being fixed, find *v* which satisfies:
  $$ \min\limits_{v}\int_{\Omega}\left[\frac{\alpha}{2}|\triangle_{\text{NL}}v|^{2}+\frac{1}{2\lambda}\left(\,f-u-v\right)^{2}\right]dx $$(6)

### 3.2 The v-subproblem

With *u* fixed, the minimizer *v* of the v-subproblem (6) is given by solving the corresponding Euler-Lagrange equation:

$$ \left(I+\alpha\lambda\,\triangle_{\text{NL}}^{2}\right)v=f-u, $$

where *I* is the identity matrix. This linear elliptic equation can be solved efficiently by Gauss-Seidel iteration.
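A Gauss-Seidel sweep for this step can be sketched as follows. The routine is generic, assuming the system matrix (here *I*+*αλ*△²_{NL}, assembled as a dense array) has a nonzero diagonal; for the symmetric positive-definite system above it converges.

```python
import numpy as np

def gauss_seidel(A, b, n_sweeps=200, x0=None):
    """Plain Gauss-Seidel for A x = b: each sweep updates x[i] in place
    using the latest values of the other components."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(n_sweeps):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

In practice one would exploit the sparsity of the weight matrix rather than storing *A* densely; the dense form is used here only for clarity.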

### 3.3 The u-subproblem

With *v* fixed, the u-subproblem (5) is in essence the minimization of a nonlocal total variation (NLTV) regularization energy. In 2005, Kindermann et al. established the nonlocal bounded variation (NLBV) space and proved the existence of a minimizer for the denoising functional with NLBV regularization [20]. In [27], Bresson gave a split Bregman method for NLTV energy minimization. According to their work, split Bregman iteration can be used directly in our setting since the new nonlocal functional is convex. We first replace ∇_{NL}*u* by *d*, which leads to the constrained problem

$$ \min\limits_{u, d}\int_{\Omega}\left[|d|+\frac{1}{2\lambda}\left(\,f-u-v\right)^{2}\right]dx\quad\text{such that}\quad d=\nabla_{\text{NL}}u. $$

We solve this problem by transforming it into an unconstrained one with a Bregman variable *b*:

$$ \min\limits_{u, d}\int_{\Omega}\left[|d|+\frac{1}{2\lambda}\left(\,f-u-v\right)^{2}+\frac{\mu}{2}\left|d-\nabla_{\text{NL}}u-b\right|^{2}\right]dx. $$(7)

Fixing *d*, we obtain from (7) the subproblem

$$ \min\limits_{u}\int_{\Omega}\left[\frac{1}{2\lambda}\left(\,f-u-v\right)^{2}+\frac{\mu}{2}\left|d-\nabla_{\text{NL}}u-b\right|^{2}\right]dx. $$(8)

The Euler-Lagrange equation of (8) is given by

$$ \frac{1}{\lambda}(u+v-f)-\mu\,\text{div}_{\text{NL}}\left(\nabla_{\text{NL}}u-d+b\right)=0. $$

Fixing *u*, the minimizer *d* of the remaining subproblem

$$ \min\limits_{d}\int_{\Omega}\left[|d|+\frac{\mu}{2}\left|d-\nabla_{\text{NL}}u-b\right|^{2}\right]dx $$(9)

can be computed by using shrinkage operators [26]:

$$ d=\text{shrink}\left(\nabla_{\text{NL}}u+b,\,1/\mu\right)=\frac{\nabla_{\text{NL}}u+b}{\left|\nabla_{\text{NL}}u+b\right|}\max\left(\left|\nabla_{\text{NL}}u+b\right|-1/\mu,\,0\right). $$

The Bregman variable is then updated by *b*←*b*+∇_{NL}*u*−*d*.

To obtain the optimal values of *u* and *d*, subproblems (8) and (9) should in principle be solved to full convergence; in practice, however, this proves unnecessary. For many applications, we have found that optimal efficiency is obtained when only one iteration of the inner loop is performed.
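The shrinkage step has a one-line closed form. A sketch for NL vectors stored as a matrix, with row *i* holding the discrete nonlocal gradient at pixel *i* (the function name is ours):

```python
import numpy as np

def nl_shrink(q, gamma, eps=1e-12):
    """Row-wise magnitude shrinkage: returns q/|q| * max(|q| - gamma, 0),
    where |q| is the per-row Euclidean norm (the NL gradient magnitude)."""
    mag = np.sqrt((q ** 2).sum(axis=1, keepdims=True))
    return q * np.maximum(mag - gamma, 0.0) / np.maximum(mag, eps)
```

Rows whose magnitude falls below 1/*μ* are set exactly to zero, which is what suppresses small noisy gradients while leaving strong nonlocal edges almost intact.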

### 3.4 Algorithm of the new model

We combine the alternating minimization method and the split Bregman method to obtain the algorithm for our new model, and summarize it for the bi-regularized model (4) as Algorithm 1.

To implement Algorithm 1 for the new image restoration model (4), we also give its discrete version.
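The full alternating scheme can be sketched as follows. This is an illustrative dense-matrix reading of the algorithm (exact linear solves instead of Gauss-Seidel, one split-Bregman sweep per outer iteration); the function names and all parameter defaults are our own choices, and real code would use sparse operators.

```python
import numpy as np

def shrink_rows(q, gamma, eps=1e-12):
    # per-pixel magnitude shrinkage of the NL vector q (row i = gradient at pixel i)
    mag = np.sqrt((q ** 2).sum(axis=1, keepdims=True))
    return q * np.maximum(mag - gamma, 0.0) / np.maximum(mag, eps)

def nlcep_l2(f, w, alpha=1.0, lam=0.5, mu=2.0, n_outer=50, tol=2.5e-3):
    """Dense sketch of the alternating NLCEP-L2 scheme: a split-Bregman
    sweep on the u-subproblem (5), then an exact solve of the v-subproblem (6).
    f: flattened image (length n); w: symmetric n x n nonlocal weight matrix."""
    n = f.size
    sw = np.sqrt(w)
    Lg = np.diag(w.sum(axis=1)) - w             # graph Laplacian: Lg u = -lap_NLD u
    Mu = np.eye(n) / lam + 2.0 * mu * Lg        # u-step matrix (1/lam) I + mu G^T G
    Mv = np.eye(n) + alpha * lam * (Lg @ Lg)    # v-step matrix I + alpha*lam*lap^2
    grad = lambda x: (x[None, :] - x[:, None]) * sw   # (grad_NLD x)_{ij}
    div = lambda p: ((p - p.T) * sw).sum(axis=1)      # (div_NLD p)_i
    u, v = f.astype(float).copy(), np.zeros(n)
    d = np.zeros((n, n)); b = np.zeros((n, n))
    for _ in range(n_outer):
        u_old = u
        # u-step, EL equation: ((1/lam) I + mu G^T G) u = (f - v)/lam - mu * div(d - b)
        u = np.linalg.solve(Mu, (f - v) / lam - mu * div(d - b))
        # d-step (shrinkage) and Bregman update
        gu = grad(u)
        d = shrink_rows(gu + b, 1.0 / mu)
        b = b + gu - d
        # v-step: (I + alpha*lam*lap^2) v = f - u
        v = np.linalg.solve(Mv, f - u)
        # relative-change stopping rule, as in the experiments section
        if np.linalg.norm(u - u_old) < tol * np.linalg.norm(u):
            break
    return u, v
```

Here the identity *G*ᵀ*G*=2*L*_{g} (with *G* the discrete nonlocal gradient) turns the u-step Euler-Lagrange equation into a symmetric positive-definite linear system, so both linear steps are well posed.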

## 4 Numerical experiments

In this section, we demonstrate several numerical results for image denoising and compare with the CEP-*L*_{2} model to show the effectiveness of our new model. In our experiments, we choose a square search window centered at *i* of size 11×11 pixels and a similarity square neighbourhood *N*_{j} of 5×5 pixels. The iteration termination condition is \(\frac {\|u^{k}-u^{k-1}\|_{2}}{\|u^{k}\|_{2}}<\text {tol}\), where *tol* is a small positive number defined by the user; in our algorithm, we set tol=2.5×10^{−3}. The parameter *α* controls the smoothness of the component *v* in the geometric part of the image: the larger the value of *α*, the stronger the staircase effect. In our experiments, we found that 0.5≤*α*≤50 is appropriate for gray-scale images with intensities from 0 to 255. The amount of noise removed from a given image is controlled by the parameters *λ* and *μ*: the larger they are, the more geometric features will be averaged.

Let *f*=*u*_{0}+*n* be the noisy version of the true image *u*_{0} of size *M*×*N*, where *n* stands for additive white Gaussian noise with mean zero and standard deviation *σ*. To characterize the noise level, we use the peak signal-to-noise ratio (*PSNR*) to quantify how good a denoised image *u* is. The peak signal-to-noise ratio is defined by

$$ \mathit{PSNR}=10\log_{10}\frac{255^{2}}{\mathit{MSE}}, $$

where *MSE* is the mean squared error defined by

$$ \mathit{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(u(i, j)-u_{0}(i, j)\right)^{2}. $$
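The quality measure above is straightforward to compute; a small sketch (the peak value 255 matches the 8-bit intensity range used in the experiments):

```python
import numpy as np

def psnr(u, u0, peak=255.0):
    """PSNR in dB of image u against reference u0, intensities in [0, peak]."""
    mse = np.mean((np.asarray(u, float) - np.asarray(u0, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```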

Figures 1 and 2 show the results of our method and the CEP-*L*_{2} model on the Lena image with additive Gaussian white noise of *σ*=10 and *σ*=15, respectively. In our experiments, we choose the parameters *λ*=2, *α*=1, *μ*=6 in Fig. 1 and *λ*=2, *α*=2, *μ*=10 in Fig. 2. From the results, we can see that our method performs better than the CEP-*L*_{2} model. To illustrate the advantages of our method, we give close-up details of the denoised Lena image in Fig. 1. Comparing these details, we can conclude that a certain amount of staircase effect still remains in the Lena image denoised by CEP-*L*_{2}, while the details of the Lena image denoised by our method (NLCEP-*L*_{2}) are well preserved with less staircase effect. In Fig. 3, we apply our new denoising model to the Cameraman image with Gaussian noise of standard deviation *σ*=10 and compare it with the CEP-*L*_{2} model, choosing *λ*=2, *α*=2, *μ*=3 in our algorithm. In Fig. 4, we present our results for the Barbara image with highly textured patterns; the parameters are chosen as *λ*=0.2, *α*=0.5, *μ*=3.

From Figs. 1, 2, 3 and 4, we can see that the images denoised by the new NLCEP-*L*_{2} method are smoother. The new method produces sharper edges and suppresses jaggy artifacts better than the CEP-*L*_{2} method, providing slightly better quality of denoised images.

The corresponding residual components *f*−*u*−*v* are shown in Figs. 5, 6, 7 and 8.

We also test the methods under Gaussian noise with different standard deviations *σ* (*σ*=5, 10, 15, 20). Table 1 displays the signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR) of the experiments. We can see that our new model obtains quite good results and performs better than the CEP-*L*_{2} model and the TV method, especially for small *σ* (*σ*=5, 10). It is effective for images with low noise levels, not only natural images with many sharp edges but also images with much texture.

Table 1 Comparison of *SNR* and *PSNR* between TV, CEP-*L*_{2}, and NLCEP-*L*_{2}

| Image | Noise *σ* | TV SNR | TV PSNR | CEP-*L*_{2} SNR | CEP-*L*_{2} PSNR | NLCEP-*L*_{2} SNR | NLCEP-*L*_{2} PSNR |
|---|---|---|---|---|---|---|---|
| Cameraman | 10 | 18.5077 | 30.7430 | 19.2044 | 31.4397 | 20.0955 | 32.3308 |
| Lena | 10 | 17.3721 | 31.9383 | 17.7784 | 32.3446 | 18.4483 | 33.0145 |
| Lena | 15 | 15.0388 | 29.6050 | 15.2907 | 29.8569 | 16.3500 | 30.9162 |
| Cameraman | 20 | 14.5561 | 26.7914 | 15.1091 | 27.3444 | 16.4166 | 28.6519 |
| Barbara | 15 | 11.6077 | 25.7100 | 12.2180 | 26.3203 | 12.4654 | 26.5676 |
| Barbara | 5 | 18.8485 | 32.9508 | 19.3232 | 33.4252 | 20.1640 | 34.2663 |

## 5 Conclusions

In this paper, we present a new nonlocal variational bi-regularized model for image restoration, which is the nonlocal version of the CEP-*L*_{2} model. We apply the alternating minimization technique and the split Bregman algorithm to solve the variational bi-regularized minimization problem. By applying the new model to image denoising problems, we show that it is an effective technique that can produce satisfactory denoised images. The experimental results verify that it obtains better results than some previous methods.

## Declarations

### Acknowledgements

This work is supported in part by two grants from the National Natural Science Fund of China (Nos. 61201431, 61170253), the SDUST Research Fund (No. 2012KYTD105), and the Qingdao Postdoctoral Research Project.

## References

- LI Rudin, S Osher, E Fatemi, Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena. 60(1-4), 259–268 (1992)
- M Zhu, SJ Wright, TF Chan, Duality-based algorithms for total-variation-regularized image restoration. Comput. Optim. Appl. 47, 377–400 (2010)
- Y Chen, T Wunderli, Adaptive total variation for image restoration in BV space. J. Math. Anal. Appl. 272(1), 117–137 (2002)
- Y Meyer, *Oscillating Patterns in Image Processing and Nonlinear Evolution Equations. University Lecture Series, No. 22* (American Mathematical Society, Providence, RI, 2001)
- L Vese, S Osher, Modeling textures with total variation minimization and oscillating patterns in image processing. J. Sci. Comput. 19(1-3), 553–572 (2003)
- L Vese, S Osher, Image denoising and decomposition with total variation minimization and oscillatory functions. J. Math. Imaging Vis. 20(1-2), 7–18 (2004)
- S Osher, A Sole, L Vese, Image decomposition and restoration using total variation minimization and the *H*^{−1} norm. Multiscale Model. Simul. 1(3), 349–370 (2003)
- A Chambolle, P-L Lions, Image recovery via total variation minimization and related problems. Numer. Math. 76(2), 319–335 (1997)
- TF Chan, S Esedoglu, FE Park, Image decomposition combining staircase reduction and texture extraction. J. Vis. Commun. Image Represent. 18, 464–486 (2007)
- A Buades, B Coll, JM Morel, A non-local algorithm for image denoising, in *IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20-26 June* (IEEE Computer Society Press, 2005)
- A Buades, B Coll, JM Morel, A review of image denoising algorithms, with a new one. SIAM Multiscale Model. Simul. 4(2), 490–530 (2005)
- G Gilboa, S Osher, Nonlocal linear image regularization and supervised segmentation. SIAM Multiscale Model. Simul. 6(2), 595–630 (2007)
- G Gilboa, S Osher, Nonlocal operators with applications to image processing. SIAM Multiscale Model. Simul. 7(3), 1005–1028 (2008)
- G Gilboa, J Darbon, S Osher, T Chan, Nonlocal convex functionals for image regularization. Technical report 06-57, UCLA CAM Report (2006)
- G Peyre, S Bougleux, LD Cohen, Non-local regularization of inverse problems. Inverse Probl. Imaging. 5(2), 511–530 (2011)
- Y Lou, X Zhang, S Osher, A Bertozzi, Image recovery via nonlocal operators. J. Sci. Comput. 42(2), 185–197 (2010)
- X Zhang, M Burger, X Bresson, S Osher, Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imaging Sci. 3(3), 253–276 (2010)
- X Zhang, T Chan, Wavelet inpainting by nonlocal total variation. Inverse Probl. Imaging. 4, 191–210 (2010)
- Y Jin, J Jost, G Wang, A nonlocal version of the Osher-Sole-Vese model. J. Math. Imaging Vis. 44, 99–113 (2012)
- S Kindermann, S Osher, PW Jones, Deblurring and denoising of images by nonlocal functionals. Multiscale Model. Simul. 4(4), 1091–1115 (2005)
- D Zhou, B Scholkopf, A regularization framework for learning from graph data, in *ICML Workshop on Statistical Relational Learning and Its Connections to Other Fields* (Banff, Canada, 2004)
- D Zhou, B Scholkopf, Regularization on discrete spaces, in *Pattern Recognition: Proceedings of the 27th DAGM Symposium, Vienna, Austria* (Springer-Verlag, Berlin Heidelberg, 2005), pp. 361–368
- W Yin, S Osher, D Goldfarb, J Darbon, Bregman iterative algorithms for *l*_{1}-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 1, 143–168 (2008)
- X Liu, L Huang, Split Bregman iteration algorithm for total bounded variation regularization based image deblurring. J. Math. Anal. Appl. 372(2), 486–495 (2010)
- J-F Cai, S Osher, Z Shen, Split Bregman methods and frame based image restoration. Multiscale Model. Simul. 8(2), 337–369 (2010)
- T Goldstein, S Osher, The split Bregman method for *l*_{1}-regularized problems. SIAM J. Imaging Sci. 2, 323–343 (2009)
- X Bresson, A short note for nonlocal TV minimization. Technical report (2009)

## Copyright

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.