A combined total variation and bilateral filter approach for image robust super resolution
EURASIP Journal on Image and Video Processing volume 2015, Article number: 19 (2015)
Abstract
In this paper, we consider the image super-resolution (SR) reconstruction problem. The main goal is to obtain a high-resolution (HR) image from a set of low-resolution (LR) ones. To that end, we propose a novel approach based on a regularized criterion. The criterion is composed of the classical generalized total variation (TV) regularizer combined with a bilateral filter (BTV) regularizer. The core of our approach is the derivation and use of an efficient combined deblurring and denoising stage applied to the high-resolution image. We demonstrate the existence of minimizers of the combined variational problem in the bounded variation space, and we propose a minimization algorithm. The numerical results obtained by our approach are compared with the classical robust super-resolution (RSR) algorithm and with SR using TV regularization. They confirm that the proposed combined approach efficiently overcomes the blurring effect while removing the noise.
1 Introduction
The reconstruction of a super-resolution image from low-resolution ones is required in numerous applications, such as video surveillance [1], medical diagnostics [2], and satellite imaging [3].
A so-called fast robust super-resolution procedure was proposed in [4]. In this approach, Farsiu et al. proposed a two-stage method. In the first stage, a high-resolution image is built, but it suffers from blur. Then, in the second stage, a deblurring and denoising procedure is applied, see [4, 5]. Our paper focuses on this second stage in the context of super-resolution. The main goal is to increase the robustness of the super-resolution (SR) technique in [4] with respect to both the blurring effect and the noise.
In most cases, the problem of image deblurring or denoising is ill-posed. This is the main reason why the problem is formulated as an optimization one with a regularized criterion. Some of the most widely used regularization functions are Tikhonov-type regularizers [6, 7] and total variation-type regularizers [4, 8, 9]. In the following, we consider a total variation (TV) regularization framework, to which we add a bilateral filtering term [4, 10]. The main point of this combination is to preserve the essential features of the image, such as boundaries and corners, that are degraded by other approaches.
The outline of the paper is as follows. In Section 2, we present the general super-resolution problem. Then, in Section 3, we present the proposed regularized criterion after reviewing the different regularizations used in the literature. We then introduce the variational problem and prove the existence of a minimizing solution of the relaxed functional using standard techniques from the calculus of variations. In Section 4, we derive the proposed algorithm, and in Section 5, we present some experimental results and compare our approach with existing ones in the literature. We end the paper with a conclusion.
2 Problem formulation
The observed images of a real scene are usually in low resolution. This is due to several degradation operators. Moreover, in practice, the acquired images are decimated, corrupted by noise, and suffer from blurring [11–13]. We assume that all low-resolution images are taken under the same environmental conditions using the same sensor.
The relationship between an ideal high-resolution (HR) image X (represented by a vector of size \([r^{2}N^{2} \times 1]\), where r is the resolution enhancement factor) and the corresponding low-resolution (LR) images \(Y_{k}\) (each represented by a vector of size \([N^{2} \times 1]\)) is described by the following model:

\(Y_{k} = D H F_{k} X + e_{k}, \quad k=1,\ldots,n \quad (1)\)
where H is the blurring operator of size \([r^{2}N^{2} \times r^{2}N^{2}]\), D represents the decimation matrix of size \([N^{2} \times r^{2}N^{2}]\), \(F_{k}\) is a geometric warp matrix of size \([r^{2}N^{2} \times r^{2}N^{2}]\) representing a non-parametric transformation that differs in each frame, and \(e_{k}\) is a vector of size \([N^{2} \times 1]\) that represents the additive noise in each image.
Given the LR images \(Y_{k}\), \(k=1,\ldots,n\), the goal of SR is to reconstruct the original image X. Because of the presence of the different degradation operators, the problem is difficult and ill-posed. In this paper, we follow the approach in [4], which separates it into three steps:

1. Computing the warp matrix \(F_{k}\) for each image.
2. Fusing the LR images \(Y_{k}\) into a blurred HR version \(B = HX\).
3. Estimating the HR image X from the blurred and noisy image B.
We will not detail the first and second steps in the following sections; for more details, see [5, 14]. We will focus on the last step which is a deconvolution and denoising step.
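As an illustration of the fusion step, a minimal median shift-and-add procedure can be sketched as follows. This is our own simplified sketch, not the code of [4]: it assumes purely translational motion with integer shifts on the HR grid, and the helper name `shift_and_add_fusion` is hypothetical.

```python
import numpy as np

def shift_and_add_fusion(lr_frames, shifts, r):
    """Median shift-and-add fusion: place each registered LR frame onto
    the HR grid at its integer shift and take a pixel-wise median over
    the frames, which is robust to outliers.  Unfilled HR pixels stay
    NaN and would be interpolated in a full implementation."""
    N = lr_frames[0].shape[0]
    stack = np.full((len(lr_frames), r * N, r * N), np.nan)
    for idx, (frame, (dx, dy)) in enumerate(zip(lr_frames, shifts)):
        # each LR pixel samples the HR grid with stride r, offset by the shift
        stack[idx, dy::r, dx::r][:N, :N] = frame
    return np.nanmedian(stack, axis=0)  # blurred HR estimate B = HX
```

The median over frames (rather than the mean) is what gives the fusion its robustness to registration outliers.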
3 Deconvolution and denoising step
In this step, we compute the HR image \(\widehat {\mathbf {X}}\) through a deblurring process applied to the image B obtained from the fusion step. Unfortunately, this inverse problem is ill-posed in the presence of noise and blur. To overcome this difficulty, we impose some prior knowledge on the HR image X in a Bayesian framework. Since X is observed in the presence of white Gaussian noise, the measured vector \(Y_{k}\) is also Gaussian. Via the Bayes rule, finding the HR image \(\widehat {\mathbf {X}}\) is equivalent to solving the following minimization problem, which defines the maximum a posteriori (MAP) super-resolution approach:

\(\widehat{\mathbf{X}} = \arg\min_{X}\left\{-\log p(B \mid X) - \log p(X)\right\} \quad (2)\)
where \(p(B \mid X)\) represents the likelihood term, defined as

\(p(B \mid X) \propto \exp\left(-\Vert HX - \widehat{B} \Vert_{1}\right) \quad (3)\)

The norm of the Lebesgue space \(\mathrm{L}^{1}(\Omega)\), \(\Vert HX-\widehat {B} \Vert _{1}\), is used since it is very robust against outliers [4]. p(X) denotes the prior knowledge on the HR image, described by the prior Gibbs function (PGF). We present in the following subsection the related work on the choice of the PGF.
3.1 Related work
There are different ways to describe the PGF; one of the classical choices is the Tikhonov-type PGF [15, 16], defined as

\(p(X) \propto \exp\left(-\lambda \Vert \Gamma X \Vert_{2}^{2}\right) \quad (4)\)

where Γ is a high-pass operator such as the Laplacian.
Knowing that edges are generally the most important features in an image, the Tikhonov regularizer is not a suitable choice, since it limits the high-frequency content of the image and in most cases destroys sharp edges. Another successful regularization is the TV-type [4, 11, 17], defined as

\(p(X) \propto \exp\left(-\lambda \int_{\Omega} f(\vert \nabla X \vert)\, dx\right) \quad (5)\)

where f is a strictly convex and nondecreasing function from \(\mathbb {R}^{+}\) to \(\mathbb {R}^{+}\) such that f(0)=0 and \(\lim \limits _{x \to +\infty }f(x)=+\infty \). This PGF is a typical choice for the denoising and deblurring processes in many restoration problems [18], since it preserves edges in the reconstruction, but it sometimes causes artificial edges in smooth regions.
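The TV term above can be evaluated on a discrete image as follows. This is a numpy sketch of our own; the forward differences with Neumann boundary and the edge-preserving choice of f (the hypersurface minimal function that also appears in Section 5) are our assumptions:

```python
import numpy as np

def tv_energy(x, f=lambda t: np.sqrt(1.0 + t**2) - 1.0):
    """Discrete TV-type regularizer sum_{i,j} f(|grad X|_{i,j}) using
    forward differences with Neumann (replicated) boundary conditions.
    The default f is the hypersurface minimal function, which satisfies
    f(0)=0 and grows linearly at infinity."""
    gx = np.diff(x, axis=1, append=x[:, -1:])  # horizontal forward difference
    gy = np.diff(x, axis=0, append=x[-1:, :])  # vertical forward difference
    return f(np.hypot(gx, gy)).sum()
```

Note that f(0)=0, so a constant image has zero TV energy, as required by the assumptions on f.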
A more robust choice of PGF is the BTV regularization, which considers a larger neighborhood when computing the gradient at a given pixel, and therefore preserves sharp edges with fewer artefacts. The BTV regularization is expressed as

\(p(X) \propto \exp\Big(-\lambda \sum_{\substack{-p \le i,j \le p \\ i+j>0}} \alpha^{\vert i \vert + \vert j \vert}\, \Vert X - S^{i}_{x} S^{j}_{y} X \Vert_{1}\Big) \quad (6)\)

The operators \({S^{i}_{x}}\) and \({S^{j}_{y}}\) shift X by i and j pixels in the horizontal and vertical directions, respectively, representing several scales of derivatives. The scalar weight α (0<α<1) gives a spatially decaying effect to the summation of the regularization terms; p is the spatial window size, and the sum runs over i+j>0.
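The BTV regularizer above can be evaluated on a discrete image as follows. This is a minimal numpy sketch of our own; the circular boundary handling via `np.roll` is a simplification of the shift operators:

```python
import numpy as np

def btv_energy(x, p=2, alpha=0.5):
    """Discrete BTV regularizer: sum over shifts -p <= i, j <= p with
    i + j > 0 of alpha^(|i|+|j|) * ||X - S_x^i S_y^j X||_1, where the
    shift operators are approximated by circular rolls."""
    total = 0.0
    for i in range(-p, p + 1):
        for j in range(-p, p + 1):
            if i + j <= 0:
                continue
            shifted = np.roll(np.roll(x, i, axis=1), j, axis=0)  # S_x^i S_y^j X
            total += alpha ** (abs(i) + abs(j)) * np.abs(x - shifted).sum()
    return total
```

As with TV, a constant image has zero BTV energy; the weight alpha^(|i|+|j|) discounts the larger shifts.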
Recently, a new robust regularization term, called bilateral edge-preserving (BEP) regularization, was introduced to preserve edges by smoothing a range of small gradients; it is defined as

\(\sum_{\substack{-p \le i,j \le p \\ i+j>0}} \alpha^{\vert i \vert + \vert j \vert}\, \rho\!\left(X - S^{i}_{x} S^{j}_{y} X,\, c\right) \quad (7)\)

where the parameter c is a threshold and ρ(x,c) is the potential function that penalizes the gradient.
3.2 The proposed regularization
Based on the strengths and weaknesses of the regularizations cited above, we propose to combine the TV and BTV regularizations in the deconvolution and denoising stage. The main idea behind this combination is to use a fairly large weight γ for the TV term, to preserve the essential image features, such as boundaries and corners, as well as possible, and a not too large weight δ for the BTV term, to preserve sharp edges while avoiding the artefacts (staircasing) caused by the TV regularizer. Thus, we propose the following PGF:

\(p(X) \propto \exp\Big(-\gamma \int_{\Omega} f(\vert \nabla X \vert)\, dx \;-\; \delta \sum_{\substack{-p \le i,j \le p \\ i+j>0}} \alpha^{\vert i \vert + \vert j \vert}\, \Vert X - S^{i}_{x} S^{j}_{y} X \Vert_{1}\Big) \quad (8)\)
We now rewrite problem (2) by substituting p(X) and p(B∣X) with their expressions in (8) and (3), respectively, which yields the final SR problem:

\(\min_{X}\; \Vert HX - B \Vert_{1} \;+\; \gamma \int_{\Omega} f(\vert \nabla X \vert)\, dx \;+\; \delta \sum_{\substack{-p \le i,j \le p \\ i+j>0}} \alpha^{\vert i \vert + \vert j \vert}\, \Vert (I - S^{i}_{x} S^{j}_{y}) X \Vert_{1} \quad (9)\)
We suppose, in addition, that f has linear growth, i.e. there exist c>0 and b≥0 such that

\(c\,x - b \;\leq\; f(x) \;\leq\; c\,x + b, \quad \forall x \in \mathbb{R}^{+} \quad (10)\)
Under this assumption, we can seek a solution of (9) in the Sobolev space \(W^{1,1}(\Omega)\) [5, 19], where Ω is the domain of the image.
Since this space is non-reflexive, we cannot say anything about a bounded minimizing sequence in \(W^{1,1}(\Omega)\). To overcome the ill-posedness of this problem, we use a relaxation procedure. A typical choice of space that guarantees compactness results is the space of functions of bounded variation BV(Ω) [18].
3.2.1 The properties of the BV(Ω) space
We summarize firstly some of the properties of the space B V(Ω) that we will use in the following theorems. We suppose in the following that Ω is bounded and has a Lipschitz boundary.
(P_1) Lower semicontinuity (l.s.c.) in BV(Ω)
Let \((u_{n})\) be a sequence in BV(Ω) such that \(u_{n} \underset {\mathrm {L}^{1}(\Omega)}{\longrightarrow } u\); then

\(\int_{\Omega} \vert Du \vert \;\leq\; \liminf\limits_{n \to +\infty} \int_{\Omega} \vert Du_{n} \vert\)
(P_2) The weak* topology in BV(Ω)
The weak* topology on BV(Ω), denoted BV−ω∗, is defined by

\(u_{n}\overset{}{\underset{BV-\omega*}{\rightharpoonup}} u \;\Longleftrightarrow\; u_{n} \underset{\mathrm{L}^{1}(\Omega)}{\longrightarrow} u \;\text{ and }\; {Du}_{n}\overset{*}{\underset{M}{\rightharpoonup}} Du\)
where \({Du}_{n}\overset {*}{\underset {M} {\rightharpoonup }}Du\) signifies

\(\lim\limits_{n \to +\infty} \int_{\Omega} \varphi \, dDu_{n} = \int_{\Omega} \varphi \, dDu, \quad \forall \varphi \in \mathcal{C}^{1}_{0}(\Omega)^{N}\)
\(\mathcal {C}^{1}_{0}(\Omega)^{N}\) is the space of continuously differentiable functions with compact support in Ω.
(P_3) Compactness results in BV(Ω)

- The space BV(Ω) is continuously embedded in \(\mathrm{L}^{2}(\Omega)\) (N=2 being the dimension of the space).
- Every uniformly bounded sequence \((X_{j})\) in BV(Ω) is relatively compact in \(\mathrm{L}^{p}(\Omega)\) for \(1 \leq p < \frac {N}{N-1}\), \(N \geq 1\). Moreover, there exists a subsequence \((X_{j_{k}})\) and \(X \in BV(\Omega)\) such that \(X_{j_{k}}\overset {}{\underset {BV-\omega *} {\rightharpoonup }} X \).
Since every bounded sequence in \(W^{1,1}(\Omega)\) is also bounded in BV(Ω), we use the classical properties of the BV−ω∗ topology to deduce the existence of a subsequence that converges in BV−ω∗. Let us now define the relaxed functional of problem (9).
Theorem 3.1.
The relaxed functional associated with problem (9), for the BV−ω∗ topology, is defined as
where D is the distributional gradient; in addition, we have
\(X^{+}\) and \(X^{-}\) are, respectively, the upper and lower limits as defined in [18].
\(\mathcal {H}\) is the Hausdorff measure and \(C_{x}\) the Cantor part.
Proof. We first define the functional F
If \(X \in W^{1,1}(\Omega)\), we have \(F(X)=\overline {F}(X)\). Let \((X_{k})_{k \in \mathbb {N}}\) be a sequence that converges to X in BV(Ω); from the l.s.c. of \(\overline {F}\), we have
Since \(\overline {F}(X) \leq F(X)\), we get that
To prove the reverse inequality, we use the theorem in [21]: for each X∈BV(Ω), there exists \((X_{k}) \subset \mathrm{C}^{\infty}(\Omega)\cap W^{1,1}(\Omega)\) such that
Since \(H: \mathrm{L}^{1}(\Omega) \to \mathrm{L}^{1}(\Omega)\) is continuous, we have
Also, the operator \((I-{S^{i}_{x}}{S^{j}_{y}})\) is continuous in \(\mathrm{L}^{1}(\Omega)\) for all i and j with i+j>0. Then
Using the continuity of f, we have
Using (14), (15) and (16), we deduce that
From (13) and (17), we have finally
Let us now prove the existence of a solution to problem (9).
Theorem 3.2.
We assume that the operators \((I-{S^{i}_{x}}{S^{j}_{y}})\) and H, defined from \(\mathrm{L}^{1}(\Omega)\) to \(\mathrm{L}^{1}(\Omega)\), are continuous and, in addition, that H does not annihilate constants (in particular, H.1≠0). We also keep all the assumptions on f stated above. Then the minimization problem
admits a solution X∈B V(Ω).
Proof. Let \((X_{k})_{k \in \mathbb {N}}\) be a minimizing sequence for (19). Using the assumption on f in (10), we deduce that there exist positive constants \(c_{1}\), \(c_{2}\), and \(c_{3}\) such that
and
Inequality (22) says that the total variation is bounded; we now have to prove that \(\Vert X_{k}\Vert_{1}\) is also bounded. We use the classical approach proposed in [22]: we construct two sequences \(Y_{k}=\frac {1}{\vert \Omega \vert }\int _{\Omega }X_{k} \,dx\) and \(Z_{k}=X_{k}-Y_{k}\); then
Using the generalized Poincaré–Wirtinger inequality [23], there exists a constant \(c_{4}\) such that
By the inequality (23) and the relation (22), we have
Then
with \(\Vert Y_{k}\Vert _{\mathrm {L}^{2}(\Omega)}=\vert \int _{\Omega }X_{k} \,dx\vert.\) We have also
where \(C=c_{5}\Vert H\Vert _{\mathrm {L}^{\infty }(\Omega)} c_{4}.c_{3}+c_{1}+c_{6},\) we have finally
Since H.1≠0, from (26) and (28) we deduce that the sequence \((X_{k})_{k \in \mathbb {N}}\) is bounded in \(\mathrm{L}^{2}(\Omega)\); since Ω is bounded, it is also bounded in \(\mathrm{L}^{1}(\Omega)\). Finally, using (22) and (26), we find that \((X_{k})_{k \in \mathbb {N}}\) is bounded in BV(Ω). Using property (P_3) of Section 3.2.1, there exists a subsequence, still denoted \((X_{k})_{k \in \mathbb {N}}\), such that
Since H is continuous, we have
and
Since \(\overline {F}\) is weak l.s.c, we deduce that
i.e., X is a minimizer of \(\overline {F}\).
Concerning uniqueness, we cannot say anything, since the \(\mathrm{L}^{1}\) norm is not strictly convex. However, if we replace the norm \(\Vert HX_{k}-B\Vert_{1}\) by \(\Vert {HX}_{k}-B {\Vert _{2}^{2}}\), uniqueness of the solution can easily be checked.
4 Proposed algorithm
In this section, we describe the numerical approach to the minimization problem (9). To discretize this problem, we use the classical approach based on the discretization of its gradient descent partial differential equation (PDE). One could also use the split Bregman algorithm [17] to solve problem (9). Using calculus of variations techniques, the gradient descent PDE associated with problem (9) reads
The minimizer of problem (9) is obtained numerically by an explicit finite difference scheme that approximates this PDE. We denote by \(X_{i,j}\), \(i,j=1,\ldots,N\), a discrete image and by \(M=\mathbb {R}^{N^{2}}\) the set of all discrete images. The operators \({S^{i}_{x}}\) and \({S^{j}_{y}}\) are given in discretized form. In addition, the discretization of the operators ∇ and div is given by
and
where
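A common concrete choice for these discrete operators, sketched here as an assumption on our part, is the forward-difference gradient with Neumann boundary paired with the backward-difference divergence, so that div = −∇* holds exactly in the discrete setting:

```python
import numpy as np

def grad(x):
    """Forward-difference gradient with Neumann (zero) boundary:
    (grad x)^1_{i,j} = x_{i,j+1} - x_{i,j}, zero on the last column,
    and similarly for the vertical component."""
    gx = np.zeros_like(x); gy = np.zeros_like(x)
    gx[:, :-1] = x[:, 1:] - x[:, :-1]
    gy[:-1, :] = x[1:, :] - x[:-1, :]
    return gx, gy

def div(gx, gy):
    """Backward-difference divergence, defined as the negative adjoint
    of grad: <grad x, p> = -<x, div p> for all images x and fields p."""
    dx = np.zeros_like(gx); dy = np.zeros_like(gy)
    dx[:, 0] = gx[:, 0]
    dx[:, 1:-1] = gx[:, 1:-1] - gx[:, :-2]
    dx[:, -1] = -gx[:, -2]
    dy[0, :] = gy[0, :]
    dy[1:-1, :] = gy[1:-1, :] - gy[:-2, :]
    dy[-1, :] = -gy[-2, :]
    return dx + dy
```

The adjointness relation is what guarantees that the discrete gradient descent actually decreases the discrete energy.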
To simplify the problem, we consider the case f(x)=x, which coincides with the classical TV regularization. The algorithm for solving problem (9) is finally given as follows:
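A minimal sketch of such an explicit scheme is given below. This is our own illustrative implementation, not the authors' code: the step size `dt`, the smoothing parameter `eps` in the TV term, the sign-based subgradients for the L1 terms, and the circular shifts standing in for the shift operators are all assumptions.

```python
import numpy as np

def _grad(x):
    # forward differences, Neumann (zero) boundary
    gx = np.zeros_like(x); gy = np.zeros_like(x)
    gx[:, :-1] = x[:, 1:] - x[:, :-1]
    gy[:-1, :] = x[1:, :] - x[:-1, :]
    return gx, gy

def _div(gx, gy):
    # backward-difference divergence, the negative adjoint of _grad
    dx = np.zeros_like(gx); dy = np.zeros_like(gy)
    dx[:, 0] = gx[:, 0]; dx[:, 1:-1] = gx[:, 1:-1] - gx[:, :-2]; dx[:, -1] = -gx[:, -2]
    dy[0, :] = gy[0, :]; dy[1:-1, :] = gy[1:-1, :] - gy[:-2, :]; dy[-1, :] = -gy[-2, :]
    return dx + dy

def sr_deblur(B, H, Ht, n_iter=200, dt=0.1, gamma=0.4, delta=0.1,
              alpha=0.5, p=2, eps=1e-3):
    """Explicit gradient descent on problem (9) with f(x)=x:
    ||HX - B||_1 + gamma * TV(X) + delta * BTV(X).
    H and Ht apply the blur and its adjoint; sign() stands in for the
    non-smooth L1 subgradients and eps smooths the TV term."""
    X = B.copy()
    for _ in range(n_iter):
        g = Ht(np.sign(H(X) - B))                 # data-fidelity subgradient
        gx, gy = _grad(X)
        norm = np.sqrt(gx**2 + gy**2 + eps**2)
        g -= gamma * _div(gx / norm, gy / norm)   # TV term
        for i in range(-p, p + 1):                # BTV term
            for j in range(-p, p + 1):
                if i + j <= 0:
                    continue
                d = np.sign(X - np.roll(np.roll(X, i, axis=1), j, axis=0))
                g += delta * alpha**(abs(i) + abs(j)) * (
                    d - np.roll(np.roll(d, -i, axis=1), -j, axis=0))
        X = X - dt * g
    return X
```

An implicit or split Bregman scheme would allow larger steps; the explicit iteration is kept here only because it mirrors the gradient descent PDE above.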
5 Numerical results
In this section, we evaluate the performance of the proposed algorithm. We construct synthetic LR images to test our algorithm and compare it with the SR algorithm with TV regularization and the robust super-resolution (RSR) algorithm. The peak signal-to-noise ratio (PSNR) is used to measure the quality of our approach. We choose a benchmark of six images (Fig. 1) with different grey-level histograms.
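For reference, the PSNR between a reconstruction and the ground truth can be computed as follows (the standard definition, sketched with an assumed 8-bit peak value of 255):

```python
import numpy as np

def psnr(x, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reconstruction x and
    the ground-truth image ref, assuming an 8-bit dynamic range."""
    mse = np.mean((np.asarray(x, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```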
We construct n=20 input low-resolution frames for each image in Fig. 1, subsampling with a decimation factor r=4 and blurring with a 5×3 Gaussian blur kernel with a standard deviation equal to 3 for all the tested images. Moreover, we add arbitrary additive white Gaussian noise \(e_{k}\) to each frame with σ=10. The parameters chosen for our algorithm are α=0.5, γ=0.4, η=1 and P=2. Different choices of the function f satisfy the assumptions above, such as the one used in the algorithm; we also use the so-called hypersurface minimal function, defined as

\(f(x) = \sqrt{1+x^{2}} - 1\)
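The degradation pipeline just described can be sketched as follows. This is an illustrative numpy-only reconstruction of the experimental setup, not the authors' code: the helper name `make_lr_frames`, the circular shifts standing in for the warp, and the truncated separable Gaussian are our assumptions.

```python
import numpy as np

def _gauss_blur(img, sigma):
    # separable Gaussian blur with a kernel truncated at 3*sigma;
    # the image must be larger than the kernel for 'same' convolution
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2)); k /= k.sum()
    img = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, img)

def make_lr_frames(hr, n=20, r=4, blur_sigma=3.0, noise_sigma=10.0, seed=0):
    """Build n synthetic LR frames from an HR image X: random integer
    shift (a stand-in for the warp F_k), Gaussian blur (H), decimation
    by r (D), and additive white Gaussian noise (e_k), following the
    degradation model of Section 2."""
    rng = np.random.default_rng(seed)
    frames, shifts = [], []
    for _ in range(n):
        dx, dy = (int(s) for s in rng.integers(0, r, size=2))
        warped = np.roll(np.roll(hr, dx, axis=1), dy, axis=0)  # F_k (circular)
        lr = _gauss_blur(warped, blur_sigma)[::r, ::r]         # H then D
        frames.append(lr + rng.normal(0.0, noise_sigma, lr.shape))
        shifts.append((dx, dy))
    return frames, shifts
```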
In Table 1, the PSNR values are shown for the six images of Fig. 1 with different choices of the noise level σ. The best PSNR value in each row is italicized. We can see that our model always outperforms the others, which confirms the efficiency of our algorithm.
In Figs. 2, 3, 4, 5, 6 and 7, we show the simulated HR images compared with SR using TV regularization [8] and RSR [4] for Fig. 1a–f, respectively. Visually, our result suppresses the noise and the errors caused by misregistration and point spread function misestimation, even if the noise is not totally removed. Typically, the execution of the main implemented programme requires an average of 2 to 15 min on a 3.0-GHz Pentium quad-core computer for 256×256 greyscale images; for colour and large-size images, one can use the parallel framework of [24], which uses many-core processors to accelerate the proposed method.
6 Conclusions
We have proposed a new combination of TV and BTV regularization in the space of functions of bounded variation, applied in the deblurring step of the robust super-resolution problem. We proved the existence of minimizers using a relaxation technique. Finally, we validated our model using the PSNR criterion in Section 5.
References
Q Luong, Advanced image and video resolution enhancement techniques. PhD thesis, Faculty of Engineering Ghent University, (2009).
D Hill, J Hajnal, D Hawkes (eds.), Medical Image Registration (CRC, 2001).
H Zhang, Z Yang, Li Zhang, H Shen, Superresolution reconstruction for multiangle remote sensing images considering resolution differences. Remote Sens. 6, 637–657 (2014).
S Farsiu, M Dirk, M Elad, P Milanfar, Fast and robust multiframe super resolution. IEEE Trans. Image Process. 13, 1327–1344 (2004).
A Laghrib, A Hakim, S Raghay, M EL Rhabi, Robust super resolution of images with nonparametric deformations using an elastic registration. Appl. Math. Sci. 8, 8897–8907 (2014).
E Lee, M Kang, Regularized adaptive highresolution image reconstruction considering inaccurate subpixel registration. IEEE Trans. Image Process. 12, 806–813 (2003).
V Patanavijit, S Jitapunkul (eds.), A Robust Iterative Multiframe Superresolution Reconstruction Using a Huber Bayesian Approach with Huber–Tikhonov Regularization (International Symposium on Intelligent Signal Processing and Communications, Yonago, Japan, 2006).
M Ng, H Shen, E Lam, L Zhang, A total variation regularization based superresolution reconstruction algorithm for digital video. EURASIP J. Adv. Signal Process, 1–16 (2007). Article ID 74585.
L Rudin, S Osher, E Fatemi, Nonlinear total variation based noise removal algorithms. Physica D. 60, 259–268 (1992).
X Zeng, L Yang, A robust multiframe super-resolution algorithm based on half-quadratic estimation with modified BTV regularization. Digital Signal Process. 23, 98–109 (2013).
P Milanfar, SuperResolution Imaging (Digital Imaging and Computer Vision) (Taylor and Francis/CRC Press, 2010).
RY Tsai, TS Huang, Multiframe image restoration and registration. Advances in Computer Vision and Image Processing, vol. 1, chap. 7 (JAI Press, Greenwich, Conn, USA, 1984).
S Park, M Park, M Kang, Super-resolution image reconstruction: a technical overview. IEEE Signal Process. Mag. 20(3), 21–36 (2003).
A Laghrib, A Hakim, S Raghay, ME Rhabi, A robust multiframe super resolution based on curvature registration and second order variational regularization. Int. J. Tomography Simul. 28, 63–71 (2015).
MK Park, MG Kang, Regularized highresolution reconstruction considering inaccurate motion information. Opt. Eng. 46, 117004 (2007).
V Patanavijit, S Jitapunkul, A Lorentzian stochastic estimation for a robust iterative multiframe super-resolution reconstruction with Lorentzian–Tikhonov regularization. EURASIP J. Adv. Signal Process, 1–21 (2007).
S Osher, M Burger, D Goldfarb, J Xu, W Yin, An iterative regularization method for total variationbased image restoration. Multiscale Model. Simul. 4, 460–489 (2005).
G Aubert, P Kornprobst, Mathematical Problems in Image Processing Partial Differential Equations and the Calculus of Variations. Second Edition (Springer, New York, 2006).
F Demengel, G Demengel, Espaces Fonctionnels. Utilisation dans la Résolution des équations aux Dérivées Partielles, Savoirs actuels (EDP Sciences, 2007).
L Ambrosio, A Compactness Theorem for a New Class of Functions of Bounded Variation (Boll. Un. Mat. Ital., 1989).
F Demengel, R Temam, Convex functions of a measure and applications. Indiana Univ. Math. J. 33, 673–709 (1984).
L Vese, Problèmes variationnels et edp pour l‘analyse d’images et l‘évolution de courbes. PhD thesis, Université de Nice SophiaAntipolis, Nov. (1996).
H Brezis, Functional Analysis, Sobolev Spaces and Partial Differential Equations (Springer, New York, 2011).
Y Zhang, J Xu, F Dai, J Zhang, Q Dai, F Wu, Efficient parallel framework for HEVC motion estimation on manycore processors. IEEE Trans. Circ. Syst. Video Technol. 24 (2014).
Acknowledgements
We are grateful to the anonymous referee for the corrections and useful suggestions that have improved this article.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Laghrib, A., Hakim, A. & Raghay, S. A combined total variation and bilateral filter approach for image robust super resolution. J Image Video Proc. 2015, 19 (2015). https://doi.org/10.1186/s1364001500754
Keywords
 Super resolution
 Bilateral filter
 Bounded variation space
 Total variation
 Relaxed functional