Open Access

A combined total variation and bilateral filter approach for image robust super resolution

EURASIP Journal on Image and Video Processing 2015, 2015:19

https://doi.org/10.1186/s13640-015-0075-4

Received: 14 February 2015

Accepted: 9 June 2015

Published: 26 June 2015

Abstract

In this paper, we consider the image super-resolution (SR) reconstruction problem. The goal is to obtain a high-resolution (HR) image from a set of low-resolution (LR) ones. To that end, we propose a novel approach based on a regularized criterion composed of the classical total variation (TV) regularizer combined with a bilateral filter (BTV) regularizer. The core of our approach is the derivation and use of an efficient combined deblurring and denoising stage applied to the high-resolution image. We demonstrate the existence of minimizers of the combined variational problem in the space of functions of bounded variation, and we propose a minimization algorithm. The numerical results obtained by our approach are compared with the classical robust super-resolution (RSR) algorithm and SR with TV regularization. They confirm that the proposed combined approach efficiently overcomes the blurring effect while removing the noise.

Keywords

Super resolution; Bilateral filter; Bounded variation space; Total variation; Relaxed function

1 Introduction

The reconstruction of a super-resolution image from low-resolution ones is required in numerous applications such as video surveillance [1], medical diagnostics [2] and satellite imaging [3].

A so-called fast robust super-resolution procedure was proposed in [4]. In this work, Farsiu et al. proposed a two-stage approach: in the first stage, a high-resolution image is built, but it suffers from blur; in the second stage, a deblurring and denoising procedure is applied, see [4, 5]. Our paper focuses on this second stage in the context of super resolution. The main goal is to increase the robustness of the super-resolution (SR) technique of [4] with respect to both the blurring effect and the noise.

In most cases, the problem of image deblurring or denoising is ill-posed. This is the main reason why it is treated as an optimization problem with a regularized criterion. Among the widely used regularization functions are Tikhonov-type regularizers [6, 7] and total variation-type regularizers [4, 8, 9]. In the following, we consider a total variation (TV) regularization framework augmented with a bilateral filtering part [4, 10]. The main point of this combination is to preserve essential image features, such as boundaries and corners, which are degraded by other approaches.

The outline of the paper is as follows. In Section 2, we present the general super-resolution problem. In Section 3, we present the proposed regularized criterion after reviewing the different regularizations used in the literature; we then introduce the variational problem and prove the existence of a minimizing solution of the relaxed functional using standard techniques from the calculus of variations. In Section 4, we derive the proposed algorithm, and in Section 5, we present some experimental results and compare our approach with existing ones from the literature. We finally end the paper with a conclusion.

2 Problem formulation

The observed images of a real scene are usually of low resolution, owing to several degradation operators. In practice, the acquired images are decimated, corrupted by noise and suffer from blurring [11–13]. We assume that all low-resolution images are taken under the same environmental conditions using the same sensor.

The relationship between an ideal high-resolution (HR) image X (represented by a vector of size \([r^{2}N^{2}\times 1]\), where r is the resolution enhancement factor) and the corresponding low-resolution (LR) ones Y_k (represented by vectors of size \([N^{2}\times 1]\)) is described by the following model
$$ \mathbf{Y_{k}} = D F_{k} H \mathbf{X}+ e_{k} \qquad \forall k=1,2,\ldots,n, $$
(1)

where H is the blurring operator of size \([r^{2}N^{2}\times r^{2}N^{2}]\), D represents the decimation matrix of size \([N^{2}\times r^{2}N^{2}]\), F_k is a geometric warp matrix of size \([r^{2}N^{2}\times r^{2}N^{2}]\), representing a non-parametric transformation that differs between frames, and e_k is a vector of size \([N^{2}\times 1]\) representing the additive noise of each image.
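To make model (1) concrete, the following sketch simulates one LR frame from a toy HR image, with H taken as a Gaussian blur, F_k as a simple integer shift, and D as decimation by the factor r. The kernel size, the shift values and all numerical parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=3.0):
    """Normalized 2-D Gaussian kernel (assumed shape of the blur H)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def conv2_same(X, k):
    """Plain 'same' 2-D convolution with zero padding (NumPy only)."""
    n = k.shape[0] // 2
    Xp = np.pad(X, n)
    out = np.zeros_like(X, dtype=float)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * Xp[i:i + X.shape[0], j:j + X.shape[1]]
    return out

def observe(X, di, dj, r=4, sigma_noise=10.0, rng=None):
    """One LR frame Y_k = D F_k H X + e_k from model (1):
    H = Gaussian blur, F_k = integer shift (toy warp), D = decimation."""
    rng = np.random.default_rng(0) if rng is None else rng
    HX = conv2_same(X, gaussian_kernel())             # H X
    FHX = np.roll(HX, (di, dj), axis=(0, 1))          # F_k H X
    Y = FHX[::r, ::r]                                 # D F_k H X
    return Y + rng.normal(0.0, sigma_noise, Y.shape)  # + e_k

X = np.zeros((64, 64)); X[16:48, 16:48] = 255.0       # toy HR image
Y = observe(X, di=1, dj=-2)
print(Y.shape)  # (16, 16)
```

Each call to `observe` with a different shift plays the role of one frame Y_k in the model.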

Given the LR images Y_k, k=1,…,n, the goal of SR is to reconstruct the original image X. Because of the presence of the different degradation operators, the problem is difficult and ill-posed. In this paper, we follow the approach in [4], which separates it into three steps:
  1. Computing the warp matrix F_k for each image.
  2. Fusing the LR images Y_k into a blurred HR version B=HX.
  3. Estimating the HR image X from the blurred and noisy image B.

We will not detail the first and second steps in the following sections; for more details, see [5, 14]. We will focus on the last step which is a deconvolution and denoising step.
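Although the fusion step is not detailed here, a rough illustration of why it yields a robust blurred HR image B is the following toy sketch: upsample already-registered LR frames to the HR grid and take a pixelwise median, which suppresses an outlier frame. The nearest-neighbour upsampling and the frame values are assumptions of this illustration only.

```python
import numpy as np

def fuse_median(frames, r):
    """Toy fusion: nearest-neighbour upsample each registered LR frame
    to the HR grid, then take the pixelwise median over frames.  The
    median makes the fused image B robust to outlier frames."""
    ups = [np.repeat(np.repeat(Y, r, axis=0), r, axis=1) for Y in frames]
    return np.median(np.array(ups), axis=0)

rng = np.random.default_rng(0)
clean = np.full((8, 8), 100.0)
frames = [clean + rng.normal(0, 5, clean.shape) for _ in range(19)]
frames.append(np.full((8, 8), 255.0))   # one completely corrupted frame
B = fuse_median(frames, r=4)
print(B.shape)  # (32, 32)
```

Despite the corrupted frame, the median of B stays close to the clean value 100, which is the robustness property exploited in step 2.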

3 Deconvolution and denoising step

In this step, we compute the HR image \(\widehat {\mathbf {X}}\) through a deblurring process applied to the image B obtained from the fusion step. Unfortunately, this inverse problem is ill-posed in the presence of noise and blur. To overcome this difficulty, we impose some prior knowledge on the HR image X in a Bayesian framework. Since X is observed in the presence of white Gaussian noise, the measured vector Y_k is also Gaussian. Via Bayes' rule, finding the HR image \(\widehat {\mathbf {X}}\) is equivalent to solving the minimization problem (2), i.e., computing the maximum a posteriori (MAP) super-resolution estimate.
$$\begin{array}{@{}rcl@{}} \widehat{\mathbf{X}}&=& \underset{\mathbf{X}}{\text{argmax}} \lbrace p(\mathbf{X} |\mathbf{B}) \rbrace \\ &=& \underset{\mathbf{X}}{\text{argmax}} \left \{ \frac{p(\mathbf{B}|\mathbf{X}).p(\mathbf{X})}{p(\mathbf{B})}\right \} \\ &=&\underset{\mathbf{X}}{\text{argmin}}\left \{ -\log(p(\mathbf{B}|\mathbf{X}))- \log(p(\mathbf{X}))\right \}, \end{array} $$
(2)
where p(B|X) represents the likelihood term defined as
$$ p\left(\mathbf{B}|\mathbf{X}\right)=\exp\left(-\Vert H\mathbf{X}-\mathbf{B} \Vert_{1}\right), $$
(3)

The norm of the Lebesgue space L1(Ω), \(\Vert H\mathbf{X}-\mathbf{B} \Vert _{1}\), is used since it is very robust against outliers [4]. p(X) denotes the prior knowledge on the HR image, described by the prior Gibbs function (PGF). In the following subsection, we review the work related to the choice of the PGF.

3.1 Related work

There are different ways to define the PGF. One classical choice is the Tikhonov-type PGF [15, 16], described as
$$ p_{Tik}(\mathbf{X})=\exp\left(- \gamma \Vert \Gamma \mathbf{X}{\Vert_{2}^{2}}\right), $$
(4)

where Γ is a high-pass operator such as the Laplacian.

Since edges are generally the most important features of an image, the Tikhonov regularizer is not a suitable choice: it limits the high-frequency components of the image and in most cases destroys sharp edges. Another successful regularization is the TV type [4, 11, 17], defined as
$$ p_{TV}(\mathbf{X})=\exp\left(- \gamma \Vert f\left(|\nabla \mathbf{X}|\right)\Vert_{1}\right), $$
(5)

where f is a strictly convex and non-decreasing function from \(\mathbb {R}^{+}\) to \(\mathbb {R}^{+}\) such that f(0)=0 and \(\lim \limits _{x \to +\infty }f(x)=+\infty \). This PGF is a typical choice for the denoising and deblurring processes of many restoration problems [18], since it preserves edges in the reconstruction; it may, however, create artificial edges on smooth surfaces.

A more robust choice of PGF is the BTV regularization, which considers a larger neighborhood when computing the gradient at a given pixel, and therefore preserves sharp edges with fewer artefacts. The BTV regularization reads
$$ p_{\text{BTV}}(\mathbf{X})=\exp \left(-\delta \sum_{i=-p}^{p} \sum_{j=-p}^{p} \alpha^{|i|+|j|} \| \mathbf{X}-{S^{i}_{x}}{S^{j}_{y}}\mathbf{X} \|_{1}\right). $$
(6)

The operators \({S^{i}_{x}}\) and \({S^{j}_{y}}\) shift X by i and j pixels in the horizontal and vertical directions, respectively, representing several scales of derivatives. The scalar weight α (0<α<1) gives a spatially decaying effect to the summation of the regularization terms; p is the spatial window size, and the identity shift i=j=0 gives no contribution to the sum.
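A direct numerical transcription of the BTV term inside (6) might look as follows. The periodic shifts obtained with `np.roll` stand in for \({S^{i}_{x}}{S^{j}_{y}}\) (a boundary-handling assumption of this sketch), and the parameter defaults follow the values used later in Section 5.

```python
import numpy as np

def btv(X, p=2, alpha=0.5):
    """Bilateral TV value from (6): sum over shifts (i, j) of
    alpha^(|i|+|j|) * || X - S_x^i S_y^j X ||_1; the identity shift
    (0, 0) is skipped since its term vanishes anyway."""
    val = 0.0
    for i in range(-p, p + 1):
        for j in range(-p, p + 1):
            if i == 0 and j == 0:
                continue
            shifted = np.roll(X, (i, j), axis=(0, 1))  # S_x^i S_y^j X
            val += alpha ** (abs(i) + abs(j)) * np.abs(X - shifted).sum()
    return val

flat = np.zeros((16, 16))
edge = np.zeros((16, 16)); edge[:, 8:] = 1.0   # a single vertical edge
print(btv(flat))       # 0.0 for a constant image
print(btv(edge) > 0)   # True
```

Note how a constant image costs nothing, while any structure (here a single edge) is penalized in proportion to the decaying weights α^{|i|+|j|}.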

Recently, a new robust regularization term, called bilateral edge-preserving (BEP) regularization, was introduced to preserve edges by smoothing a range of small gradients. It is defined as
$$ p_{\text{BEP}}(\mathbf{X})=\exp \left(-\delta \sum_{i=-p}^{p} \sum_{j=-p}^{p} \sum_{m=1}^{N}\alpha^{|i|+|j|} \rho\left(\left(\mathbf{X}-{S^{i}_{x}}{S^{j}_{y}}\mathbf{X}\right)[m],c\right)\right), $$
(7)

where c is a threshold parameter and ρ(x,c) is the potential function that penalizes the gradient.

3.2 The proposed regularization

Based on the strengths and weaknesses of the regularizations cited above, we propose to combine the TV and BTV regularizations in the deconvolution and denoising stage. The idea behind this combination is to use a fairly large weight γ on the TV term, to preserve essential image features such as boundaries and corners as well as possible, while keeping the weight δ of the BTV term moderate, to preserve sharp edges and avoid the artefacts (staircasing) caused by a pure TV regularizer. We thus propose the following PGF
$$ p(\mathbf{X})=\exp \left(- \gamma \Vert f\left(\left|\nabla \mathbf{X}\right|\right)\Vert_{1}-\delta \sum_{i=-p}^{p} \sum_{j=-p}^{p} \alpha^{|i|+|j|} \left\| \mathbf{X}-{S^{i}_{x}}{S^{j}_{y}}\mathbf{X} \right\|_{1}\right). $$
(8)
We rewrite problem (2) by substituting p(X) and p(B|X) with their expressions (8) and (3), respectively, which yields the final SR problem
$$ \begin{aligned} \widehat{X}=\underset{X}{\text{argmin}}&\left\lbrace \Vert HX-B \Vert_{1}+ \gamma \Vert f\left(\left|\nabla X\right|\right)\Vert_{1}\right.\\&\left.+\,\delta \sum_{i=-p}^{p} \sum_{j=-p}^{p} \alpha^{|i|+|j|} \| X-{S^{i}_{x}}{S^{j}_{y}}X \|_{1} \right\rbrace, \end{aligned} $$
(9)
We suppose, in addition, that f is a linear growth function, i.e., there exist c>0 and b≥0 such that
$$ cx-b \leq f(x) \leq cx+b. $$
(10)
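For illustration, criterion (9) can be evaluated numerically; the sketch below uses f(x)=x, a forward-difference discrete TV, periodic `np.roll` shifts for the BTV term, and an identity stand-in for H, all of which are assumptions of this sketch rather than the authors' implementation.

```python
import numpy as np

def tv(X):
    """Isotropic discrete TV: sum of |grad X| with forward differences
    (zero difference imposed at the last row/column)."""
    gx = np.diff(X, axis=0, append=X[-1:, :])
    gy = np.diff(X, axis=1, append=X[:, -1:])
    return np.sqrt(gx**2 + gy**2).sum()

def btv(X, p=2, alpha=0.5):
    """Bilateral TV sum from (6), periodic shifts as a stand-in."""
    val = 0.0
    for i in range(-p, p + 1):
        for j in range(-p, p + 1):
            if (i, j) != (0, 0):
                val += alpha ** (abs(i) + abs(j)) * \
                       np.abs(X - np.roll(X, (i, j), axis=(0, 1))).sum()
    return val

def objective(X, B, blur, gamma=0.4, delta=1.0):
    """Criterion (9) with f(x) = x; `blur` is a callable playing H."""
    return np.abs(blur(X) - B).sum() + gamma * tv(X) + delta * btv(X)

B = np.zeros((16, 16))
F0 = objective(np.zeros((16, 16)), B, blur=lambda X: X)  # H = identity here
print(F0)  # 0.0
```

A constant image matching B achieves the global minimum value 0, as all three terms of (9) vanish.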
Under this assumption, we can seek a solution of (9) in the Sobolev space W 1,1(Ω) [5, 19], where Ω is the domain of the image:
$$W^{1,1}(\Omega)=\left\lbrace X \in \mathrm{L}^{1}(\Omega), \quad \nabla X \in \left[\mathrm{L}^{1}(\Omega)\right]^{2}\right\rbrace. $$

Since this space is non-reflexive, nothing can be said about a bounded minimizing sequence in W 1,1(Ω). To overcome this difficulty, we use a relaxation procedure. A typical choice of space that guarantees the compactness results is the space of functions of bounded variation BV(Ω) [18].

3.2.1 The properties of the BV(Ω) space

We first summarize some of the properties of the space BV(Ω) that are used in the following theorems. We suppose throughout that Ω is bounded with a Lipschitz boundary.

(P1) Lower semicontinuity (l.s.c.) in BV(Ω)

Let (u_n) ⊂ BV(Ω) be a sequence such that \(u_{n} \underset {\mathrm {L}^{1}(\Omega)}{\longrightarrow } u\); then
$$\int_{\Omega}|Du|\leq \liminf_{n\rightarrow +\infty} \int_{\Omega}|{Du}_{n}|. $$

(P2) The weak* topology in BV(Ω)

The weak* topology in BV(Ω), denoted BV-ω*, is defined by
$$u_{n}\overset{}{\underset{BV-\omega*} {\rightharpoonup}} u \Longleftrightarrow \left\{\begin{array}{l} \;u_{n} \underset{\mathrm{L}^{1}}{\longrightarrow}u\\ {Du}_{n} \overset{*}{\underset{M} {\rightharpoonup}}Du \end{array}\right., $$
where \({Du}_{n}\overset {*}{\underset {M} {\rightharpoonup }}Du\), signifies
$$\int_{\Omega}\varphi \,{Du}_{n}\longrightarrow\int_{\Omega} \varphi \,Du \qquad\forall \varphi \in \mathcal{C}_{0}(\Omega)^{N}. $$

\(\mathcal {C}_{0}(\Omega)^{N}\) is the space of continuous functions with compact support in Ω.

(P3) Compactness results in BV(Ω)
  • The space BV(Ω) is continuously embedded in L2(Ω) (N=2 being the dimension of the space).

  • Every uniformly bounded sequence (X_j) in BV(Ω) is relatively compact in L p (Ω) for \(1 \leq p < \frac {N}{N-1}, N \geq 1\). Moreover, there exists a subsequence (X_{jk}) and X ∈ BV(Ω) such that \(X_{\textit {jk}}\overset {}{\underset {BV-\omega *} {\rightharpoonup }} X \).

    For more details about the space BV(Ω), see [18, 20].

Since every bounded sequence in W 1,1(Ω) is also bounded in BV(Ω), we use the classical properties of the BV-ω* topology to deduce the existence of a subsequence that converges for BV-ω*. Let us now define the relaxed functional of problem (9).

Theorem 3.1.

The relaxed functional associated with problem (9) for the BV-ω* topology is defined as
$$ \begin{aligned} \overline{F}(X)&= \Vert HX - B \Vert_{1} + \gamma \Vert f(|D X|)\Vert_{1}\\&\quad+ \eta \sum_{i=-p}^{p} \sum_{j=-p}^{p} \alpha^{|i|+|j|} \Vert X-{S^{i}_{x}}{S^{j}_{y}}X \Vert_{1}, \end{aligned} $$
(11)
where D is the distributional gradient; in addition, we have
$$\begin{aligned} \int_{\Omega} f(|D X|) dx&= \int_{\Omega} f(|\nabla X|) dx+c \int_{S_{X}} |X^{+}-X^{-}|d\mathcal{H}\\&\quad+c \int_{\Omega-S_{X}} |C_{X}|, \end{aligned} $$
\(X^{+}\) and \(X^{-}\) are, respectively, the upper and lower limits as defined in [18],
$$S_{X}=\lbrace x \in \Omega \, : \, X^{-}(x) < X^{+}(x)\rbrace, $$

\(\mathcal {H}\) is the Hausdorff measure and \(C_{X}\) the Cantor part.

Proof. We first define the function F:
$${\kern15pt} \begin{aligned} F(X) =\left\{\begin{array}{ll} \Vert HX-B \Vert_{1}+ \gamma \Vert\, f(|\nabla X|)\Vert_{1} \\ + \eta \sum_{i=-p}^{p} \sum_{j=-p}^{p} \alpha^{|i|+|j|} \Vert X-{S^{i}_{x}}{S^{j}_{y}}X \Vert_{1} & \quad \text{if}\, X \in W^{1,1}(\Omega)\\ +\infty & \quad \text{if}\, X \in BV(\Omega)\setminus W^{1,1}(\Omega) \end{array}\right.. \end{aligned} $$
(12)
If X ∈ W 1,1(Ω), we have \(F(X)=\overline {F}(X)\). Let \((X_{k})_{k \in \mathbb {N}}\) be a sequence converging to X in BV(Ω); from the l.s.c. of \(\overline {F}\), we have
$$\overline{F}(X) \leq \liminf_{k \to +\infty}\overline{F}(X_{k}). $$
Since \(\overline {F}(X) \leq F(X)\), we get that
$$ \overline{F}(X) \leq \liminf_{k \to +\infty} F(X_{k}). $$
(13)
To prove the reverse inequality, we use the theorem in [21]: for each X ∈ BV(Ω), there exists a sequence (X_k) ⊂ C (Ω)∩W 1,1(Ω) such that
$$X_{k}\overset{}{\underset{BV-\omega*} {\rightharpoonup}} X, $$
Since H:L1(Ω)→L1(Ω) is continuous, we have
$$ \Vert {HX}_{k}-B \Vert_{1} \longrightarrow \Vert HX-B \Vert_{1}. $$
(14)
Moreover, the operator \((I-{S^{i}_{x}}{S^{j}_{y}})\) is continuous in L1(Ω) for every i and j. Then
$$ \begin{aligned} &\sum_{i=-p}^{p}\sum_{j=-p}^{p} \alpha^{|i|+|j|} \left\Vert X_{k}-{S^{i}_{x}}{S^{j}_{y}}X_{k} \right\Vert_{1} \longrightarrow \\& \sum_{i=-p}^{p}\sum_{j=-p}^{p} \alpha^{|i|+|j|} \left\Vert X-{S^{i}_{x}}{S^{j}_{y}}X \right\Vert_{1}. \end{aligned} $$
(15)
Using the continuity of f, we have
$$ f(|D X_{k}|)(\Omega) \longrightarrow f(|D X|)(\Omega). $$
(16)
Using (14), (15) and (16), we deduce that
$$ \liminf_{k \to +\infty} F(X_{k}) \leq \overline{F}(X). $$
(17)
From (13) and (17), we finally obtain
$$ \overline{F}(X)=\liminf_{k \to +\infty} F(X_{k}). $$
(18)

Let us now prove the existence of a solution.

Theorem 3.2.

We assume that the operators \((I-{S^{i}_{x}}{S^{j}_{y}})\) and H, defined from L1(Ω) to L1(Ω), are continuous and, in addition, that H does not annihilate the constants (in particular, H.1≠0). We also keep all the assumptions on f stated above. Then the minimization problem
$$ \underset{X \in BV(\Omega)}{\inf }\overline{F}(X) $$
(19)

admits a solution X ∈ BV(Ω).

Proof. Let \((X_{k})_{k \in \mathbb {N}}\) be a minimizing sequence for (19). Using the assumption (10) on f, we deduce that there exist positive constants c 1, c 2 and c 3 such that
$$ \Vert {HX}_{k}-B \Vert_{1} \leq c_{1}, $$
(20)
$$ \sum_{i=-p}^{p}\sum_{j=-p}^{p} \alpha^{|i|+|j|} \Vert X_{k}-{S^{i}_{x}}{S^{j}_{y}}X_{k} \Vert_{1} \leq c_{2}, $$
(21)
and
$$ |D X_{k}|(\Omega) \leq c_{3}. $$
(22)
Inequality (22) shows that the total variation is bounded; we must now prove that \(\Vert X_{k}\Vert _{1}\) is also bounded. We use the classical approach proposed in [22]: we construct the two sequences \(Y_{k}=\frac {1}{\vert \Omega \vert }\int _{\Omega }X_{k} \,dx\) and Z_k = X_k − Y_k; then
$$ \int_{\Omega}Z_{k} \, dx=0, \quad \text{and}\quad {DZ}_{k}={DX}_{k}. $$
(23)
Using the generalized Poincaré-Wirtinger inequality [23], there exists a constant c 4 such that
$$ \Vert Z_{k}\Vert_{\mathrm{L}^{2}(\Omega)} \leq c_{4}\Vert D Z_{k}\Vert(\Omega). $$
(24)
By inequality (24) and the bound (22), we have
$$ \Vert Z_{k}\Vert_{\mathrm{L}^{2}(\Omega)} \leq c_{4}.c_{3}. $$
(25)
Then
$$ \begin{aligned} \Vert X_{k}\Vert_{\mathrm{L}^{2}(\Omega)} & = \Vert X_{k}-Y_{k}+Y_{k}\Vert_{\mathrm{L}^{2}(\Omega)}\\ &= \Vert Z_{k}+Y_{k}\Vert_{\mathrm{L}^{2}(\Omega)}\\ &\leq \Vert Z_{k}\Vert_{\mathrm{L}^{2}(\Omega)}+\Vert Y_{k}\Vert_{\mathrm{L}^{2}(\Omega)}\\ &\leq c_{4}.c_{3}+\Vert Y_{k}\Vert_{\mathrm{L}^{2}(\Omega)}, \end{aligned} $$
(26)
with \(\Vert Y_{k}\Vert _{\mathrm {L}^{2}(\Omega)}=\vert \int _{\Omega }X_{k} \,dx\vert.\) We also have
$${} {\fontsize{8.6pt}{9.6pt}\selectfont{\begin{aligned} \left\Vert H\left(\frac{1}{\vert \Omega\vert}\int_{\Omega}X_{k} \,dx\right)\right\Vert_{\mathrm{L}^{1}(\Omega)} & \leq \Vert {HY}_{k}-{HX}_{k}\Vert_{\mathrm{L}^{1}(\Omega)}+\Vert {HX}_{k}\\[-5pt]&\quad-B\Vert_{\mathrm{L}^{1}(\Omega)}+\Vert B \Vert_{\mathrm{L}^{1}(\Omega)}\\ &\leq \Vert H\Vert_{\mathrm{L}^{\infty}(\Omega)}\Vert Z_{k}\Vert_{\mathrm{L}^{1}(\Omega)}+c_{1}+\Vert B \Vert_{\mathrm{L}^{1}(\Omega)}\\ &\leq c_{5}\Vert H\Vert_{\mathrm{L}^{\infty}(\Omega)}\Vert Z_{k}\Vert_{\mathrm{L}^{2}(\Omega)}+c_{1}+c_{6}\\ &\leq c_{5}\Vert H\Vert_{\mathrm{L}^{\infty}(\Omega)} c_{4}.c_{3}+c_{1}+c_{6}\\ &\leq C, \end{aligned}}} $$
(27)
where \(C=c_{5}\Vert H\Vert _{\mathrm {L}^{\infty }(\Omega)} c_{4}.c_{3}+c_{1}+c_{6}.\) We finally have
$$ \begin{aligned} \Vert H\left(\frac{1}{\vert \Omega\vert}\int_{\Omega}X_{k} \,dx\right)\Vert_{\mathrm{L}^{1}(\Omega)}& = \vert\int_{\Omega}X_{k} \,dx\vert \Vert H.1\Vert_{\mathrm{L}^{1}(\Omega)}\\ &\leq C. \end{aligned} $$
(28)
Since H.1≠0, we deduce from (26) and (28) that the sequence \((X_{k})_{k \in \mathbb {N}}\) is bounded in L2(Ω); as Ω is bounded, it is also bounded in L1(Ω). Finally, using (22) and (26), we find that \((X_{k})_{k \in \mathbb {N}}\) is bounded in BV(Ω). Using property (P3) of Section 3.2.1, there exists a subsequence, still denoted \((X_{k})_{k \in \mathbb {N}}\), such that
$$X_{k}\overset{}{\underset{BV-\omega*} {\rightharpoonup}} X. $$
Since H is continuous, we have
$$\Vert {HX}_{k}-B \Vert_{1} \longrightarrow \Vert HX-B \Vert_{1}, $$
and
$${} {\fontsize{8.6pt}{9.6pt}\selectfont{\begin{aligned} \sum_{i=-p}^{p}\sum_{j=-p}^{p} \!\alpha^{|i|+|j|} \Vert X_{k}-{S^{i}_{x}}{S^{j}_{y}}X_{k} \Vert_{1} \!\longrightarrow\!\! \sum_{i=-p}^{p}\sum_{j=-p}^{p}\! \alpha^{|i|+|j|} \Vert X-{S^{i}_{x}}{S^{j}_{y}}X \Vert_{1}. \end{aligned}}} $$
Since \(\overline {F}\) is weak l.s.c, we deduce that
$$ \overline{F}(X) \leq \liminf_{k \to +\infty} \overline{F}(X_{k})=\underset{X \in BV(\Omega)}{\inf} \overline{F}(X), $$
(29)

i.e. X is a minimum of \(\overline {F}\).

Concerning uniqueness, nothing can be said, since the L1 norm is not strictly convex. However, if the norm H X k B1 is replaced by \(\Vert {HX}_{k}-B {\Vert _{2}^{2}}\), the uniqueness of the solution can easily be checked.

4 Proposed algorithm

In this section, we describe the numerical approach to the minimization problem (9). To discretize this problem, we use the classical approach based on the discretization of its gradient descent partial differential equation (PDE); the split Bregman algorithm [17] could also be used to solve (9). Using calculus of variations techniques, the gradient descent PDE associated with problem (9) reads
$$ \left\{\begin{array}{l} \;\partial_{t} X= -H^{\intercal}\text{sign}(HX-B) +\gamma\, \text{div} \left(\frac{f^{\prime}(|\nabla X|)}{|\nabla X|} \nabla X\right)\\ \qquad\quad -\eta \sum_{i=-p}^{p} \sum_{j=-p}^{p} \alpha^{|i|+|j|}\left(I-S^{-i}_{x}S^{-j}_{y}\right) \text{sign}\left(X-{S^{i}_{x}}{S^{j}_{y}}X\right),\\ \nu\cdot \nabla X=0 \qquad \text{on} \quad \partial \Omega. \end{array}\right. $$
The minimizer of problem (9) is obtained numerically by an explicit finite difference scheme that approximates this PDE. We denote by X i,j , i,j=1,…,N, a discrete image and by \(M=\mathbb {R}^{N^{2}}\) the set of all discrete images. The operators \({S^{i}_{x}}\) and \({S^{j}_{y}}\) are taken in their discretized form. In addition, the discretization of the operators ∇ and div is given by
$$(\nabla X)^{1}_{i,j}= \left\{\begin{array}{ll} X_{i+1,j}-X_{i,j} &\text{if} \quad i<N\\ 0 &\text{if} \quad i=N \end{array},\right. $$
$$(\nabla X)^{2}_{i,j}= \left\{\begin{array}{ll} X_{i,j+1}-X_{i,j} &\text{if} \quad j<N\\ 0 &\text{if} \quad j=N \end{array},\right. $$
and
$$\left(\text{div}\left(p^{1},p^{2}\right)\right)_{i,j}=\left(\text{div}\left(p^{1},p^{2}\right)\right)_{i,j}^{1}+\left(\text{div}\left(p^{1},p^{2}\right)\right)_{i,j}^{2}, $$
where
$$\left(\text{div}\left(p^{1},p^{2}\right)\right)_{i,j}^{1}= \left\{\begin{array}{ll} p^{1}_{i,j}-p^{1}_{i-1,j} &\text{if} \quad 1<i<N\\ p^{1}_{i,j} &\text{if} \quad i=1\\ -p^{1}_{i-1,j} &\text{if} \quad i=N \end{array},\right. $$
$$ \left(\text{div}\left(p^{1},p^{2}\right)\right)_{i,j}^{2}= \left\{\begin{array}{ll} p^{2}_{i,j}-p^{2}_{i,j-1} &\text{if} \quad 1<j<N\\ p^{2}_{i,j} &\text{if} \quad j=1\\ -p^{2}_{i,j-1} &\text{if} \quad j=N \end{array},\right. $$
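These discrete ∇ and div operators are designed so that div is the negative adjoint of ∇, i.e., ⟨∇X, p⟩ = −⟨X, div p⟩. The following sketch checks this identity numerically, using −p¹_{i−1,j} at i=N (the boundary rule that makes the adjointness exact).

```python
import numpy as np

def grad(X):
    """Forward differences with a zero in the last row/column."""
    gx = np.zeros_like(X); gx[:-1, :] = X[1:, :] - X[:-1, :]
    gy = np.zeros_like(X); gy[:, :-1] = X[:, 1:] - X[:, :-1]
    return gx, gy

def div(p1, p2):
    """Discrete divergence chosen as the negative adjoint of grad
    (backward differences with the standard boundary rows)."""
    d1 = np.zeros_like(p1)
    d1[0, :] = p1[0, :]; d1[1:-1, :] = p1[1:-1, :] - p1[:-2, :]
    d1[-1, :] = -p1[-2, :]
    d2 = np.zeros_like(p2)
    d2[:, 0] = p2[:, 0]; d2[:, 1:-1] = p2[:, 1:-1] - p2[:, :-2]
    d2[:, -1] = -p2[:, -2]
    return d1 + d2

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 8))
p1, p2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
gx, gy = grad(X)
lhs = (gx * p1).sum() + (gy * p2).sum()   # <grad X, p>
rhs = -(X * div(p1, p2)).sum()            # -<X, div p>
print(abs(lhs - rhs) < 1e-10)  # True
```

The adjointness guarantees that the discrete TV term and its descent direction are mutually consistent, which matters for the stability of the explicit scheme below.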
To simplify the problem, we consider the case f(x)=x, which coincides with the classical TV regularization. The algorithm for solving problem (9) is finally given as follows:
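The explicit scheme described above can be sketched as follows for f(x)=x; the step size τ, the smoothing ε added inside |∇X|, the periodic shifts, the parameter values and the identity stand-in for H are all assumptions of this illustration, not the authors' implementation.

```python
import numpy as np

def sr_descent(B, blur, blur_T, n_iter=100, tau=0.005, gamma=0.4,
               eta=1.0, alpha=0.5, p=2, eps=1e-3):
    """Explicit gradient descent for (9) with f(x) = x (sketch).
    `blur` / `blur_T` play the roles of H and its transpose."""
    X = B.copy()
    for _ in range(n_iter):
        # data term: H^T sign(HX - B)
        g = blur_T(np.sign(blur(X) - B))
        # TV term: -gamma * div(grad X / |grad X|), eps-smoothed
        gx = np.zeros_like(X); gx[:-1, :] = X[1:, :] - X[:-1, :]
        gy = np.zeros_like(X); gy[:, :-1] = X[:, 1:] - X[:, :-1]
        nrm = np.sqrt(gx**2 + gy**2 + eps**2)
        px, py = gx / nrm, gy / nrm
        d = np.zeros_like(X)
        d[0, :] += px[0, :]; d[1:, :] += px[1:, :] - px[:-1, :]
        d[:, 0] += py[:, 0]; d[:, 1:] += py[:, 1:] - py[:, :-1]
        g -= gamma * d
        # BTV term: eta * sum w_ij (I - S^{-i} S^{-j}) sign(X - S^i S^j X)
        for i in range(-p, p + 1):
            for j in range(-p, p + 1):
                if (i, j) == (0, 0):
                    continue
                s = np.sign(X - np.roll(X, (i, j), axis=(0, 1)))
                g += eta * alpha ** (abs(i) + abs(j)) * \
                     (s - np.roll(s, (-i, -j), axis=(0, 1)))
        X = X - tau * g   # explicit descent step
    return X

rng = np.random.default_rng(0)
truth = np.zeros((32, 32)); truth[8:24, 8:24] = 1.0
B = truth + rng.normal(0, 0.2, truth.shape)          # noisy fused image
X = sr_descent(B, blur=lambda u: u, blur_T=lambda u: u)  # H = identity demo
print(round(float(np.abs(X - truth).mean()), 3))
```

With H set to the identity, the scheme reduces to a TV+BTV denoiser; replacing `blur`/`blur_T` with the actual H and its transpose recovers the deconvolution setting of Section 3.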

5 Numerical results

In this section, we evaluate the performance of the proposed algorithm. We construct synthetic LR images to test it and compare it with the SR algorithm with TV regularization and the robust super-resolution (RSR) algorithm. The peak signal-to-noise ratio (PSNR) is used to measure the quality of our approach. We choose a benchmark of six images (Fig. 1) with different grey-level histograms.
Fig. 1

a–f Set of benchmark images used as the original images in the tests

We construct n=20 input low-resolution frames for each image in Fig. 1, sub-sampled with a decimation factor r=4 and blurred with a 5×3 Gaussian blur kernel with standard deviation 3 for all tested images. Moreover, we add white Gaussian noise e_k to each frame, with σ=10. The parameters chosen for our algorithm are α=0.5, γ=0.4, η=1 and p=2. Several choices of the function f satisfy the assumptions above, such as the one used in the algorithm; we also use the so-called hypersurface minimal function, defined as
$$ f(x)= \sqrt{1+x^{2}}. $$
(30)
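With this choice, the diffusion weight f′(x)/x appearing in the descent PDE equals 1/√(1+x²): close to 1 in flat regions (strong smoothing) and small across large gradients (edge preservation). A quick check:

```python
import numpy as np

# For f(x) = sqrt(1 + x^2): f'(x) = x / sqrt(1 + x^2),
# so the diffusion weight is f'(x)/x = 1 / sqrt(1 + x^2).
f = lambda x: np.sqrt(1 + x**2)
w = lambda x: 1.0 / np.sqrt(1 + x**2)        # = f'(x)/x

print(round(w(0.0), 3), round(w(100.0), 3))  # 1.0 0.01
```

This also verifies the linear growth assumption (10), since x ≤ √(1+x²) ≤ x+1 for x ≥ 0.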
Table 1 reports the PSNR values for the six images of Fig. 1 and different noise levels σ. The best value in each row is italicized. Our model consistently achieves the best PSNR, which confirms the efficiency of the proposed algorithm.
Table 1

The PSNR table

Image     Method               σ=10      σ=15      σ=20
Lena      SR with TV reg.      27.2222   26.868    26.426
          RSR                  28.07     27.78     27.589
          Proposed approach    29.0844   28.7012   28.5562
Barbara   SR with TV reg.      26.0826   25.658    25.4893
          RSR                  25.6836   25.1263   25.0369
          Proposed approach    26.6194   26.2022   26.0014
Bird      SR with TV reg.      33.0900   32.6237   32.254
          RSR                  33.1751   32.8233   32.5865
          Proposed approach    34.8474   34.5266   34.33
Lake      SR with TV reg.      30.9070   30.25     29.922
          RSR                  30.6298   30.2522   30.0866
          Proposed approach    31.0437   30.86     30.636
Baboon    SR with TV reg.      26.0667   25.789    25.3244
          RSR                  25.7250   25.388    25.263
          Proposed approach    27.4975   27.1626   26.92
Peppers   SR with TV reg.      29.9331   29.544    29.1668
          RSR                  30.9049   30.5012   30.278
          Proposed approach    30.9569   30.68     30.4622

RSR robust super resolution, SR super resolution, TV total variation

In Figs. 2, 3, 4, 5, 6 and 7, we show the simulated HR images compared with SR using TV regularization [8] and the RSR [4] for Fig. 1a–f, respectively. Visually, our result suppresses the noise and the errors caused by misregistration and point spread function misestimation, even if the noise is not totally removed. Typically, the execution of the main implemented programme requires an average of 2–15 min on a 3.0 GHz Pentium quad-core computer for 256×256 grey-scale images; for colour and large-size images, the parallel framework of [24], which uses many-core processors, can be employed to accelerate the proposed method.
Fig. 2

a–d Results obtained for image ‘Lena’: the proposed method compared with the SR algorithm using TV regularization and the RSR

Fig. 3

a–d Results obtained for image ‘Barbara’: the proposed method compared with the SR algorithm using TV regularization and the RSR

Fig. 4

a–d Results obtained for image ‘Bird’: the proposed method compared with the SR algorithm using TV regularization and the RSR

Fig. 5

a–d Results obtained for image ‘Lake’: the proposed method compared with the SR algorithm using TV regularization and the RSR

Fig. 6

a–d Results obtained for image ‘Baboon’: the proposed method compared with the SR algorithm using TV regularization and the RSR

Fig. 7

a–d Results obtained for image ‘Peppers’: the proposed method compared with the SR algorithm using TV regularization and the RSR

6 Conclusions

We have proposed a new combination of TV and BTV regularization in the space of functions of bounded variation, applied in the deblurring step of the robust super-resolution problem. We proved the existence of minimizers using a relaxation technique. Finally, we validated the choice of our model using the PSNR criterion in Section 5.

Declarations

Acknowledgements

We are grateful to the anonymous referee for the corrections and useful suggestions that have improved this article.

Authors’ Affiliations

(1)
Laboratory LAMAI, Faculty of Science and Technology

References

  1. Q Luong, Advanced image and video resolution enhancement techniques. PhD thesis, Faculty of Engineering, Ghent University (2009).
  2. D Hill, J Hajnal, D Hawkes (eds.), Medical Image Registration (CRC, 2001).
  3. H Zhang, Z Yang, L Zhang, H Shen, Super-resolution reconstruction for multi-angle remote sensing images considering resolution differences. Remote Sens. 6, 637–657 (2014).
  4. S Farsiu, M Dirk, M Elad, P Milanfar, Fast and robust multiframe super resolution. IEEE Trans. Image Process. 13, 1327–1344 (2004).
  5. A Laghrib, A Hakim, S Raghay, M EL Rhabi, Robust super resolution of images with non-parametric deformations using an elastic registration. Appl. Math. Sci. 8, 8897–8907 (2014).
  6. E Lee, M Kang, Regularized adaptive high-resolution image reconstruction considering inaccurate subpixel registration. IEEE Trans. Image Process. 12, 806–813 (2003).
  7. V Patanavijit, S Jitapunkul, A Robust Iterative Multiframe Superresolution Reconstruction Using a Huber Bayesian Approach with Huber–Tikhonov Regularization (International Symposium on Intelligent Signal Processing and Communications, Yonago, Japan, 2006).
  8. M Ng, H Shen, E Lam, L Zhang, A total variation regularization based superresolution reconstruction algorithm for digital video. EURASIP J. Adv. Signal Process., 1–16 (2007). Article ID 74585.
  9. L Rudin, S Osher, E Fatemi, Nonlinear total variation based noise removal algorithms. Physica D 60, 259–268 (1992).
  10. X Zenga, L Yangi, A robust multiframe super-resolution algorithm based on half-quadratic estimation with modified BTV regularization. Digital Signal Process. 23, 98–109 (2013).
  11. P Milanfar, Super-Resolution Imaging (Digital Imaging and Computer Vision) (Taylor and Francis/CRC Press, 2010).
  12. RY Tsai, TS Huang, Multiframe image restoration and registration. Advances in Computer Vision and Image Processing, vol. 1, chap. 7 (JAI Press, Greenwich, Conn, USA, 1984).
  13. S Park, M Park, M Kang, Super-resolution image reconstruction: a technical overview. 20(3), 21–36 (2003).
  14. A Laghrib, A Hakim, S Raghay, ME Rhabi, A robust multi-frame super resolution based on curvature registration and second order variational regularization. Int. J. Tomography Simul. 28, 63–71 (2015).
  15. MK Park, MG Kang, Regularized high-resolution reconstruction considering inaccurate motion information. Opt. Eng. 46, 117004 (2007).
  16. V Patanavijit, S Jitapunkul, A Lorentzian stochastic estimation for a robust iterative multiframe super-resolution reconstruction with Lorentzian–Tikhonov regularization. EURASIP J. Adv. Signal Process., 1–21 (2007).
  17. S Osher, M Burger, D Goldfarb, J Xu, W Yin, An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 4, 460–489 (2005).
  18. G Aubert, P Kornprobst, Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations, 2nd edn. (Springer, New York, 2006).
  19. F Demengel, G Demengel, Espaces Fonctionnels. Utilisation dans la Résolution des équations aux Dérivées Partielles, Savoirs actuels (EDP Sciences, 2007).
  20. L Ambrosio, A Compactness Theorem for a New Class of Functions of Bounded Variation (Boll. Un. Mat. Ital., 1989).
  21. F Demengel, R Temam, Convex functions of a measure and applications. Indiana Univ. Math. J. 33, 673–709 (1984).
  22. L Vese, Problèmes variationnels et EDP pour l’analyse d’images et l’évolution de courbes. PhD thesis, Université de Nice Sophia-Antipolis (1996).
  23. H Brezis, Functional Analysis, Sobolev Spaces and Partial Differential Equations (Springer, New York, 2011).
  24. Y Zhang, J Xu, F Dai, J Zhang, Q Dai, F Wu, Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Trans. Circ. Syst. Video Technol. 24 (2014).

Copyright

© Laghrib et al. 2015

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.