
Sparse Bayesian blind image deconvolution with parameter estimation

Abstract

In this article, we propose a novel blind image deconvolution method developed within the Bayesian framework. We concentrate on the restoration of blurred photographs taken by commercial cameras to show its effectiveness. The proposed method is based on a non-convex l_p quasi norm with 0<p<1 for the image prior and a total variation (TV) based prior for the blur. Bayesian inference is carried out by bounding both the image and blur priors using the majorization-minimization principle. Maximum a posteriori estimates of the unknown image, blur, and model parameters are calculated. Experimental results (restorations of more than 30 blurred photographs) are presented to demonstrate the advantage of the proposed method over existing ones.

1 Introduction

Blind image deconvolution (BID) refers to the process of estimating both the original image and the blur from the degraded noisy image observation by using partial information about the imaging system. Blind image deconvolution algorithms are a valuable tool for improving image quality without requiring complicated calibration of the real-time image acquisition and processing system (e.g., in medical imaging, video-conferencing, space exploration, and x-ray imaging).

The blind image deconvolution problem is encountered in many different technical areas, such as astronomical imaging, remote sensing, microscopy, medical imaging, optics, super-resolution, and motion tracking, among others (see, for example, [1-9]). Astronomical imaging is one of the primary applications of blind image deconvolution algorithms [1, 2]. Ground-based imaging systems are subject to blurring due to the rapidly changing index of refraction of the atmosphere. Extraterrestrial observations of the Earth and the planets are degraded by motion blur as a result of slow camera shutter speeds relative to the rapid spacecraft motion. Blind image deconvolution is also used to improve the quality of blurred X-rays, mammograms, and digital angiographic images affected by Poisson-distributed film grain noise. In such applications the blurring is often unavoidable because medical imaging systems limit the intensity of the incident radiation (e.g., low X-ray intensity) in order to protect the patient's health [10]. In optics, blind image deconvolution is used to restore the original image from the degradation introduced by a microscope or any other optical instrument [6, 7]. The Hubble Space Telescope main mirror imperfections alone have provided an inordinate amount of images for the digital image processing community [1]. As a final example, in tracking applications the object being tracked may be blurred due to its speed or the motion of the camera; the track is then lost with conventional tracking approaches, and the application of blind restoration can improve tracking results [9].

The standard formulation of the gray-scale image degradation model is given in matrix-vector form by

y=Hx+n,
(1)

where the N×1 vectors x, y, and n represent, respectively, the original image, the available noisy and blurred image, and the observation noise, and H represents the blurring matrix created from the blur point spread function h. The images are assumed to be of size m×n = N, and they are lexicographically ordered into N×1 vectors. Given y, the BID problem calls for finding estimates of x and H using prior knowledge about them.
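For illustration, the following minimal Python sketch generates a synthetic observation y = Hx + n. It assumes circular boundary conditions, so that H is block-circulant and the product Hx reduces to a 2-D convolution computed with FFTs; the helper names psf_to_otf and degrade are ours, not from the paper.

```python
import numpy as np

def psf_to_otf(psf, shape):
    """Embed the PSF in an array of the image size and move its center to
    the (0, 0) corner, so that its FFT acts as the blurring operator H."""
    out = np.zeros(shape)
    kh, kw = psf.shape
    out[:kh, :kw] = psf
    out = np.roll(out, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(out)

def degrade(x, psf, sigma):
    """Return y = Hx + n for a gray-scale image x, blur psf, and noise level sigma."""
    H = psf_to_otf(psf, x.shape)
    blurred = np.real(np.fft.ifft2(H * np.fft.fft2(x)))
    return blurred + sigma * np.random.randn(*x.shape)
```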

A number of methods have been proposed to address BID (a recent literature review can be found in [11]). The most recent methods are based on a Bayesian framework and have addressed the removal of camera motion from blurred photographs [12-19]. In [12] the unknown image and blur were estimated in a two-step process. In the first step the blur is estimated from the blurred photograph by regularizing the image gradients with a mixture of Gaussian distributions and the blur with a mixture of exponential distributions. In the second step the image is estimated from the blurred photograph and the estimated blur by utilizing the Richardson-Lucy (RL) algorithm. Finally, the restored image is obtained by matching the histogram of the RL output to that of the observed image. A multi-scale approach is utilized for the algorithm implementation: the blurred photograph is down-sampled to a number of low-resolution images, and the algorithm is applied iteratively to obtain the blur estimate at each resolution.

Estimating camera motion has also been investigated in [13, 14], where the unknown image and blur were estimated in a simultaneous fashion. Additionally, [14] concentrated on synthetic experiments in which the performance of the algorithm was evaluated by the improvement in signal-to-noise ratio. Also, in [13, 14] the regularization parameters are not automatically estimated but rather heuristically chosen by the user at each iteration in order to yield an image estimate with good visual quality. The major disadvantage of the methods proposed in [13, 14] compared to the method proposed in [12] is this lack of parameter estimation.

In this article, we extend our study in [20] by providing (1) a multi-scale implementation of the algorithm, which improves the quality of the obtained restored images, and (2) a complete comparison with many existing state-of-the-art blind deconvolution methods. The proposed Bayesian algorithm for BID utilizes a variant of the non-convex l_p quasi norm based prior for the unknown image and the TV prior for the unknown blur. Furthermore, we utilize the Bayesian framework to provide estimates of all model parameters. Finally, we evaluate the performance of the proposed algorithm and provide comparisons with [12-14, 16, 19] by restoring blurred photographs taken by commercial cameras.

This article is organized as follows. In Section 2 we provide the proposed Bayesian modeling of the BID problem. The Bayesian inference is presented in Section 3. In Section 4, we describe implementation details of the proposed algorithm. Experimental results are provided in Section 5 and conclusions are drawn in Section 6.

2 Bayesian modeling

As already discussed in the previous section, the observation noise is modeled as a zero mean white Gaussian random vector. Therefore, the observation model is defined as

p(y \mid \beta, x, h) \propto \beta^{N/2} \exp\left( -\frac{\beta}{2} \| y - Hx \|^2 \right),
(2)

where β is the precision of the multivariate Gaussian distribution.

As the image prior we utilize a variant of the generalized Gaussian distribution, given by

p(x \mid \alpha) = \frac{1}{Z_{GG}(\alpha)} \exp\left( -\sum_{d \in D} \alpha_d \sum_i |\Delta_i^d(x)|^p \right),
(3)

where Z_GG(α) is the partition function, 0<p<1, α denotes the set {α_d}, and d ∈ D = {h, v, hh, vv, hv}. Δ_i^h(x) and Δ_i^v(x) correspond, respectively, to the horizontal and vertical first-order differences at pixel i, that is, Δ_i^h(x) = x_i − x_{l(i)} and Δ_i^v(x) = x_i − x_{a(i)}, where l(i) and a(i) denote the nearest neighbors of i to the left and above, respectively. The operators Δ_i^hh(x), Δ_i^vv(x), and Δ_i^hv(x) correspond, respectively, to the horizontal, vertical, and horizontal-vertical second-order differences at pixel i.

In this study, similarly to [15, 21], we utilize a non-convex l_p quasi norm with 0<p<1 since the derivatives of blurry photographs are expected to be sparse. The distributions of the image derivatives often have heavy tails, which are better modeled by the non-convex l_p quasi norm prior with 0<p<1 than by the convex priors obtained with p = 1, 2.

To reduce the complexity of the problem we assume that α_h = α_v = α and α_hh = α_vv = α_hv = α/2. Additionally, similarly to [22], the partition function is approximated as Z_GG(α) ∝ α^{−λ_1 N/p}, where λ_1 is a positive real number. We then simplify (3) accordingly to obtain the following image prior

p(x \mid \alpha) \propto \alpha^{\lambda_1 N / p} \exp\left( -\alpha \sum_{d \in D} 2^{1-o(d)} \sum_{i=1}^{N} |\Delta_i^d(x)|^p \right),
(4)

where o(d) ∈ {1, 2} denotes the order of the difference operator Δ_i^d(x).
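As a concrete illustration of the penalty inside the exponent of (4), the sketch below evaluates the five finite-difference fields and the weighted l_p sum. Circular (wrap-around) differences via np.roll and the function name lp_penalty are our own simplifying assumptions; p = 0.8 follows the value 4/5 used later in the text.

```python
import numpy as np

def lp_penalty(x, p=0.8):
    """Sum_d 2^(1-o(d)) Sum_i |Delta_i^d(x)|^p for d in {h, v, hh, vv, hv}."""
    dh = x - np.roll(x, 1, axis=1)      # horizontal first-order difference
    dv = x - np.roll(x, 1, axis=0)      # vertical first-order difference
    dhh = dh - np.roll(dh, 1, axis=1)   # horizontal second-order difference
    dvv = dv - np.roll(dv, 1, axis=0)   # vertical second-order difference
    dhv = dh - np.roll(dh, 1, axis=0)   # mixed second-order difference
    first = (np.abs(dh) ** p).sum() + (np.abs(dv) ** p).sum()       # weight 2^0 = 1
    second = sum((np.abs(d) ** p).sum() for d in (dhh, dvv, dhv))   # weight 2^-1 = 1/2
    return first + 0.5 * second
```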

For the blur we utilize the total-variation prior given by (see [23] for more details)

p(h \mid \gamma) \propto \gamma^{\lambda_2 N} \exp\left( -\gamma \, \mathrm{TV}(h) \right),
(5)

where λ_2 is a positive real number and TV(h) is defined as

\mathrm{TV}(h) = \sum_i \sqrt{ \left(\Delta_i^h(h)\right)^2 + \left(\Delta_i^v(h)\right)^2 }.
(6)
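A corresponding sketch of the isotropic total variation of the blur in (6), again assuming circular first-order differences for simplicity:

```python
import numpy as np

def total_variation(h):
    """TV(h) = Sum_i sqrt((Delta_i^h(h))^2 + (Delta_i^v(h))^2)."""
    dh = h - np.roll(h, 1, axis=1)
    dv = h - np.roll(h, 1, axis=0)
    return np.sqrt(dh ** 2 + dv ** 2).sum()
```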

In this study, we use flat improper hyperpriors on α, β, and γ, that is, we utilize

p(\alpha) \propto \mathrm{const}, \quad p(\beta) \propto \mathrm{const}, \quad p(\gamma) \propto \mathrm{const}.
(7)

Note that with this choice of the hyperpriors, the observed image y is made solely responsible for the estimation of the image, blur and hyperparameters.

3 Bayesian inference

Bayesian inference on the unknown components of the blind image deconvolution problem is based on the estimation of the unknown posterior distribution p(α, β, γ, x, h | y), given by

p(\alpha, \beta, \gamma, x, h \mid y) = \frac{ p(\alpha, \beta, \gamma, x, h, y) }{ p(y) }.
(8)

Assuming that x and h are independent, the joint distribution p(α, β, γ, x, h, y) can be factorized in terms of the observation model p(y | β, x, h), the prior distributions p(x | α) and p(h | γ), and the hyperparameter distributions p(α), p(β), and p(γ), that is,

p(\alpha, \beta, \gamma, x, h, y) = p(y \mid \beta, x, h)\, p(x \mid \alpha)\, p(h \mid \gamma)\, p(\alpha)\, p(\beta)\, p(\gamma).
(9)

In this study, we adopt the maximum a posteriori (MAP) approach to obtain a single point estimate, \bar{\Theta} = (\bar{\alpha}, \bar{\beta}, \bar{\gamma}, \bar{x}, \bar{h}), that maximizes p(α, β, γ, x, h | y) as follows,

\bar{\Theta} = \arg\max_{\Theta}\, p(\alpha, \beta, \gamma, x, h \mid y) = \arg\min_{\Theta} \left\{ \frac{\beta}{2}\|y - Hx\|^2 + \alpha \sum_{d \in D} 2^{1-o(d)} \sum_i |\Delta_i^d(x)|^p + \gamma\, \mathrm{TV}(h) - \frac{\lambda_1 N}{p}\log\alpha - \frac{N}{2}\log\beta - \lambda_2 N \log\gamma \right\}.
(10)

As can be seen from (10), obtaining the point estimate that maximizes the posterior distribution p(α, β, γ, x, h | y) is not straightforward since it requires the minimization of a non-convex functional. Maximizing the posterior distribution by iteratively optimizing over one variable while fixing the others (the so-called iterated conditional modes (ICM) method [24]) is equivalent to variational Bayesian based maximization (see [25] for an example derivation) in the special case when all the posterior distributions are assumed to be degenerate.

In this article, we apply the majorization-minimization approach twice to bound the non-convex functional to be minimized. We start by bounding the non-convex image prior p(x | α) by the functional M_1(α, x, Z), that is,

p(x \mid \alpha) \ge \mathrm{const} \cdot M_1(\alpha, x, Z).
(11)

The majorization-minimization approach has been utilized in several approaches for image restoration [25, 26].

The functional M_1(α, x, Z) is derived by considering the relationship between the weighted geometric and arithmetic means, which is given by

t^{p/2}\, z^{1-p/2} \le \frac{p}{2}\, t + \left(1 - \frac{p}{2}\right) z,
(12)

where t ≥ 0, z > 0, and 0 < p < 2. We first rewrite (12) as

t^{p/2} \le \frac{p}{2} \cdot \frac{ t + \frac{2-p}{p}\, z }{ z^{1-p/2} }.
(13)

Using (13) we obtain

|\Delta_i^d(x)|^p \le \frac{p}{2} \cdot \frac{ [\Delta_i^d(x)]^2 + \frac{2-p}{p}\, z_{d,i} }{ z_{d,i}^{1-p/2} }.
(14)

Therefore, we have

p(x \mid \alpha) = \mathrm{const} \cdot \alpha^{\lambda_1 N / p} \exp\left( -\alpha \sum_{d \in D} 2^{1-o(d)} \sum_i |\Delta_i^d(x)|^p \right) \ge \mathrm{const} \cdot \alpha^{\lambda_1 N / p} \exp\left( -\frac{\alpha p}{2} \sum_{d \in D} 2^{1-o(d)} \sum_i \frac{ [\Delta_i^d(x)]^2 + \frac{2-p}{p}\, z_{d,i} }{ z_{d,i}^{1-p/2} } \right).
(15)

Then (11) holds by setting

M_1(\alpha, x, Z) = \alpha^{\lambda_1 N / p} \exp\left( -\frac{\alpha p}{2} \sum_{d \in D} 2^{1-o(d)} \sum_i \frac{ [\Delta_i^d(x)]^2 + \frac{2-p}{p}\, z_{d,i} }{ z_{d,i}^{1-p/2} } \right),
(16)

where Z is a matrix with elements z_{d,i}, with d ∈ {h, v, hh, vv, hv} and i = 1, …, N.
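As a sanity check of the majorization, the snippet below verifies inequality (13) numerically for random t ≥ 0 and z > 0, and confirms that the bound is tight at z = t, which is exactly the choice made by update (23) of the algorithm (there, t = [Δ_i^d(x)]^2). The snippet is illustrative only.

```python
import numpy as np

p = 0.8
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 10.0, size=100_000)
z = rng.uniform(1e-6, 10.0, size=100_000)

lhs = t ** (p / 2)
rhs = (p / 2) * (t + (2 - p) / p * z) / z ** (1 - p / 2)
assert np.all(lhs <= rhs + 1e-12)        # inequality (13) holds

# the bound touches the left-hand side when z = t
rhs_tight = (p / 2) * (t + (2 - p) / p * t) / t ** (1 - p / 2)
assert np.allclose(lhs, rhs_tight)
```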

Similarly, the majorization-minimization criterion is used to bound the blur prior p(h | γ) by the functional M_2(γ, h, u). Let us define, for γ and any N-dimensional vector u ∈ (R^+)^N with components u_i, i = 1, …, N, the following functional

M_2(\gamma, h, u) = \gamma^{\lambda_2 N} \exp\left( -\frac{\gamma}{2} \sum_i \frac{ \left(\Delta_i^h(h)\right)^2 + \left(\Delta_i^v(h)\right)^2 + u_i }{ \sqrt{u_i} } \right).
(17)

Using the inequality in (13) with p = 1, for t ≥ 0 and z > 0, that is,

\sqrt{t} \le \sqrt{z} + \frac{1}{2\sqrt{z}}\,(t - z),
(18)

we obtain

p(h \mid \gamma) \ge \mathrm{const} \cdot M_2(\gamma, h, u).
(19)

The lower bounds of p(x | α) and p(h | γ) defined above lead to the following lower bound of the distribution p(α, β, γ, x, h, y):

p(\alpha, \beta, \gamma, x, h, y) = p(\alpha)\, p(\beta)\, p(\gamma)\, p(x \mid \alpha)\, p(h \mid \gamma)\, p(y \mid \beta, x, h) \ge \mathrm{const} \cdot p(\alpha)\, p(\beta)\, p(\gamma)\, M_1(\alpha, x, Z)\, M_2(\gamma, h, u)\, p(y \mid \beta, x, h).

Therefore, a single point estimate that maximizes the lower bound of the posterior distribution p(α, β, γ, x, h | y) is found as follows:

\bar{\Theta} = \arg\min_{\Theta} \left\{ \frac{\beta}{2}\|y - Hx\|^2 + \frac{\alpha p}{2} \sum_{d \in D} 2^{1-o(d)} \sum_i \frac{ [\Delta_i^d(x)]^2 + \frac{2-p}{p}\, z_{d,i} }{ z_{d,i}^{1-p/2} } + \frac{\gamma}{2} \sum_i \frac{ \left(\Delta_i^h(h)\right)^2 + \left(\Delta_i^v(h)\right)^2 + u_i }{ \sqrt{u_i} } - \frac{\lambda_1 N}{p}\log\alpha - \frac{N}{2}\log\beta - \lambda_2 N \log\gamma \right\}.
(20)

As shown in (20), we are effectively replacing the original non-convex minimization problem (10) by a series of convex ones by utilizing the majorization-minimization criteria and introducing the additional variational vectors z_d and u. By iteratively solving this convex optimization problem in an alternating fashion with respect to all unknowns, we obtain a sequence of point estimates and derive the proposed algorithm as shown next.

3.1 Algorithm

Given α^1, β^1, γ^1, h^1, u_i^1 = [Δ_i^h(h^1)]^2 + [Δ_i^v(h^1)]^2, and z_{d,i}^1.

for k = 1, 2, … until a stopping criterion is met:

  1. Calculate

     x^k = \left[ \beta^k (H^k)^t H^k + \alpha^k p \sum_{d \in D} 2^{1-o(d)} (\Delta^d)^t W_d^k \Delta^d \right]^{-1} \beta^k (H^k)^t y,
     (21)

where W_d^k is a diagonal matrix with entries W_d^k(i,i) = (z_{d,i}^k)^{p/2-1}.

  2. Calculate

     h^{k+1} = \left[ \beta^k (X^k)^t X^k + \gamma^k \sum_{d \in \{h,v\}} (\Delta^d)^t U^k \Delta^d \right]^{-1} \beta^k (X^k)^t y,
     (22)

where U^k is a diagonal matrix with entries U^k(i,i) = (u_i^k)^{-1/2}.

  3. For each d ∈ {h, v, hh, vv, hv} calculate

     z_{d,i}^{k+1} = [\Delta_i^d(x^k)]^2,
     (23)
  4. Calculate

     u_i^{k+1} = [\Delta_i^h(h^{k+1})]^2 + [\Delta_i^v(h^{k+1})]^2,
     (24)
  5. Calculate

     \alpha^{k+1} = \frac{\lambda_1 N / p}{\sum_{d \in D} 2^{1-o(d)} \sum_i |\Delta_i^d(x^k)|^p},
     (25)
     \beta^{k+1} = \frac{N}{\| y - H^{k+1} x^k \|^2},
     (26)
     \gamma^{k+1} = \frac{\lambda_2 N}{\mathrm{TV}(h^{k+1})}.
     (27)
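To make the iteration concrete, the sketch below implements the closed-form updates of steps 3-5 (equations (23)-(27)). The image and blur updates (21) and (22) amount to solving large sparse positive-definite linear systems and would typically be handled by a conjugate-gradient solver, which we omit here. The helper diff, the circular boundary handling, and the function name update_auxiliary_and_hyperparameters are our own assumptions; lam1 and lam2 are left as arguments because the text fixes λ_1 and λ_2 experimentally without listing their values.

```python
import numpy as np

def diff(img, d):
    """Circular finite difference Delta^d, d in {'h','v','hh','vv','hv'}."""
    dh = img - np.roll(img, 1, axis=1)
    dv = img - np.roll(img, 1, axis=0)
    return {'h': dh, 'v': dv,
            'hh': dh - np.roll(dh, 1, axis=1),
            'vv': dv - np.roll(dv, 1, axis=0),
            'hv': dh - np.roll(dh, 1, axis=0)}[d]

def total_variation(h):
    return np.sqrt(diff(h, 'h') ** 2 + diff(h, 'v') ** 2).sum()

def update_auxiliary_and_hyperparameters(x, h, residual_norm2, lam1, lam2, p=0.8):
    """Steps 3-5; residual_norm2 stands for ||y - H^{k+1} x^k||^2."""
    N = x.size
    first, second = ('h', 'v'), ('hh', 'vv', 'hv')
    # (23): z_{d,i} = [Delta_i^d(x)]^2 for the five difference operators
    z = {d: diff(x, d) ** 2 for d in first + second}
    # (24): u_i = [Delta_i^h(h)]^2 + [Delta_i^v(h)]^2
    u = diff(h, 'h') ** 2 + diff(h, 'v') ** 2
    # (25)-(27): hyperparameter updates
    lp_sum = (sum((np.abs(diff(x, d)) ** p).sum() for d in first)
              + 0.5 * sum((np.abs(diff(x, d)) ** p).sum() for d in second))
    alpha = lam1 * N / p / lp_sum
    beta = N / residual_norm2
    gamma = lam2 * N / total_variation(h)
    return z, u, alpha, beta, gamma
```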

Following the line of study presented in [21], the parameter p is set to 4/5 (see [21] for a detailed discussion). Additionally, the parameters λ_1 and λ_2 are needed to approximate the partition functions of the prior distributions p(x | α) and p(h | γ), respectively. These approximations are necessary because the corresponding partition functions are analytically intractable. We follow the approaches proposed in [22, 23], as already described in the previous section, and determine the values of λ_1 and λ_2 experimentally. The values of the parameters p, λ_1, and λ_2 are therefore fixed throughout all the experiments that follow. The robustness of the proposed method is tested and evaluated under various blurring and noise conditions.

Note that if the blur h and the hyperparameters α, β, and γ are assumed to be known, the proposed algorithm coincides with the iteratively re-weighted least squares (IRLS) algorithm presented in [21] (i.e., in this case both algorithms compute the image estimate as shown in (21)). Note also that the l_p quasi norm based prior is utilized in [15], where the prior used in [21] is simplified by omitting the second-order derivatives.

4 Multi-scale implementation

The restoration results presented in [12], and more recently in [27], showed the effectiveness of the multi-scale approach in implementing blind image deconvolution algorithms. Furthermore, it is shown in [27] that the multi-scale approach prevents the algorithm from converging to the unit impulse. Alternatively, the authors in [13, 14] introduced heuristic re-weighting of the regularization parameters at each iteration to prevent the algorithms from converging to unrealistic blur estimates. In our study, no heuristic adjustment of the parameters is performed; instead, we estimate them automatically within the Bayesian framework.

To avoid unrealistic blur estimates we adopt here a multi-scale scheme similar in spirit to the one proposed in [12]. Additionally, the proposed multi-scale approach allows the user to initialize the proposed algorithm automatically, without visually inspecting the observed blurred image to determine an initial blur estimate. By analyzing the observed blurred image it is possible to come up with more informative initial blur estimates; however, in this study our focus is on developing a completely automated algorithm once the blur support is provided.

The basic idea behind the multi-scale approach is to down-sample the observed blurred image to a number of low-resolution images. At the lowest resolution the initial blur estimate (i.e., h^1) is set to the uniform blur, and the lowest-resolution down-sampled observation is utilized as the initial image estimate (i.e., x^1). After convergence is achieved at each scale we up-sample the image and blur estimates to the next higher resolution and re-run the proposed algorithm. This process is repeated until the algorithm converges and image and blur estimates are obtained at their native resolution. The detailed pseudocode used in the implementation of our multi-scale approach is shown in the Appendix.
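A schematic of this multi-scale loop is sketched below. It assumes a routine run_bid(y, x_init, h_init) that implements the algorithm of Section 3.1 at a single scale and returns the image and blur estimates; the routine name, the number of scales, and the down-sampling factor are illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import zoom

def multiscale_bid(y, blur_support, n_scales=4, factor=0.5):
    # pyramid of down-sampled observations, coarsest scale first
    pyramid = [zoom(y, factor ** s) for s in reversed(range(n_scales))]
    # coarsest scale: uniform blur and the down-sampled observation as initializers
    k = max(3, int(round(blur_support * factor ** (n_scales - 1))))
    h = np.ones((k, k)) / (k * k)
    x = pyramid[0]
    for level, y_s in enumerate(pyramid):
        x, h = run_bid(y_s, x, h)        # single-scale algorithm of Section 3.1 (assumed)
        if level + 1 < n_scales:
            nxt = pyramid[level + 1].shape
            x = zoom(x, (nxt[0] / x.shape[0], nxt[1] / x.shape[1]))  # up-sample image
            h = np.clip(zoom(h, 1.0 / factor), 0, None)              # up-sample blur
            h /= h.sum()                                             # keep it normalized
    return x, h
```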

5 Experimental results

In this section, we present the experimental results obtained with the proposed algorithm. As the performance metric for the experiments in which the original image is known, we utilize the improvement in signal-to-noise ratio of the restored image (denoted by ISNR_x̂), defined as 10 log_10 (‖x − y‖^2 / ‖x − x̂‖^2), where x, y, and x̂ are the original, observed, and estimated images, respectively. Analogously, when the blur is known we utilize the improvement in signal-to-noise ratio of the restored blur (denoted by ISNR_ĥ), defined as 10 log_10 (‖h − h_δ‖^2 / ‖h − ĥ‖^2), where h, h_δ, and ĥ are the original blur, the unit impulse, and the estimated blur, respectively.
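Written out explicitly (a small utility with hypothetical function names), the two metrics are:

```python
import numpy as np

def isnr_image(x, y, x_hat):
    """Improvement in SNR of the restored image, in dB."""
    return 10 * np.log10(np.sum((x - y) ** 2) / np.sum((x - x_hat) ** 2))

def isnr_blur(h, h_impulse, h_hat):
    """Improvement in SNR of the restored blur, in dB; h_impulse is the unit impulse."""
    return 10 * np.log10(np.sum((h - h_impulse) ** 2) / np.sum((h - h_hat) ** 2))
```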

In addition, once a blur support larger than the true one is specified by the user, all model parameters and the estimates of the unknown image and blur are obtained automatically as described in Sections 3 and 4. Specifying the support of the unknown blur is common among state-of-the-art approaches (see [12-14, 16, 19]).

Similarly to these approaches, the proposed algorithm is very robust even when the support of the blur is largely overestimated by the user, as can be seen in all the experiments that follow. Also, for the experiments in which blurred color images are considered, only the luminance component of the observed image is restored, while the observed chroma components are used to obtain the restored color image once the original luminance is estimated. Finally, the proposed algorithm is terminated when the criterion ‖x^k − x^{k−1}‖ / ‖x^{k−1}‖ < 10^{−3} is met or the number of iterations reaches 100. After each iteration, we enforce the following constraints on the blur estimate: positivity (blur elements less than zero are set to zero), the support constraint (blur elements outside of the blur support estimate are set to zero), and energy conservation (the blur elements sum to one).
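The stopping rule and the per-iteration blur constraints can be summarized as follows; representing the user-supplied blur support as a binary mask is our own assumption.

```python
import numpy as np

def project_blur(h, support_mask):
    """Enforce the blur constraints after each iteration."""
    h = np.where(support_mask, h, 0.0)   # support constraint
    h = np.clip(h, 0.0, None)            # positivity
    return h / h.sum()                   # energy conservation: elements sum to one

def converged(x_new, x_old, tol=1e-3):
    """Relative-change stopping criterion ||x^k - x^{k-1}|| / ||x^{k-1}|| < tol."""
    return np.linalg.norm(x_new - x_old) / np.linalg.norm(x_old) < tol
```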

In the first set of experiments, we evaluate the performance of the proposed method on four standard images (cameraman, satellite, Shepp-Logan phantom, and airplane), which are widely used in image restoration experiments. The original images are blurred with five different motion blurs, which are shown in Figure 1. Realizations of white Gaussian noise are added to the respective blurred images in order to obtain degraded images with a blurred signal-to-noise ratio (BSNR) of 40 dB. The BSNR is defined as

\mathrm{BSNR} = 10 \log_{10} \frac{\mathrm{Var}(Hx)}{\mathrm{Var}(n)},
(28)
Figure 1

Five different synthetic non-parametric motion blurs included in a window of size 21×21: (a) Blur 1: the support is 14×12, (b) Blur 2: the support is 15×14, (c) Blur 3: the support is 14×14, (d) Blur 4: the support is 18×19, (e) Blur 5: the support is 18×15.

where Var(·) denotes the variance of the random sequence. Example restorations obtained by the proposed algorithm are shown in Figure 2. The restoration results in terms of the previously defined ISNR_x̂ and ISNR_ĥ metrics are shown in Table 1. It can be observed from Table 1 that the proposed algorithm is very robust and capable of restoring the blurred images very successfully under various non-parametric motion blurs. Example convergence curves obtained by the proposed algorithm are shown in Figure 3.
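For completeness, noise at a prescribed BSNR can be generated as in the short sketch below (an illustrative helper following definition (28); the function name is ours).

```python
import numpy as np

def add_noise_at_bsnr(blurred, bsnr_db):
    """Add white Gaussian noise so that 10*log10(Var(Hx)/Var(n)) equals bsnr_db."""
    noise_var = np.var(blurred) / (10.0 ** (bsnr_db / 10.0))
    return blurred + np.sqrt(noise_var) * np.random.randn(*blurred.shape)
```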

Figure 2

Example restorations from Table 1: 1st column represents four different blurred observations, 2nd column represents their respective restorations obtained by the proposed algorithm, 3rd column represents their respective original images, 4th column represents their respective blurs obtained by the proposed algorithm, 5th column represents their respective original blurs.

Figure 3

Example convergence curves at the finest scale of the proposed algorithm for the airplane-color restoration under five different synthetic non-parametric motion blurs: (a) Blur 1. (b) Blur 2. (c) Blur 3. (d) Blur 4 and (e) Blur 5.

Table 1 ISNR_x̂ and ISNR_ĥ values for the cameraman, satellite, Shepp-Logan, and airplane images degraded by five different motion blurs (BSNR = 40 dB)

In the second set of experiments, we evaluate the performance of the proposed method on a set of 32 blurred test images taken with a commercial camera. The test images are obtained from [27] and they are available online (http://www.wisdom.weizmann.ac.il/~levina/papers/LevinEtalCVPR09Data.zip). The set of 32 blurred images was obtained by placing the original images shown in Figure 4 side by side to form a calibration image. Once the calibration image was formed, a commercial camera was mounted on a tripod and eight photos of the calibration image were acquired. Note that during the acquisition process the Z-axis rotation handle was locked while the X-axis and Y-axis handles were loosened in order to simulate in-plane camera shake (see [27] for details; the resulting blurs are shown in Figure 5). In this study, we consider a comparison with the methods of [12, 13, 16, 19]. For convenience, from now on, the methods proposed in [12, 13, 16], and the best method from [19], will be denoted, respectively, as Fergus et al., Shan et al., Cho et al., and Levin et al., while the proposed algorithm will be denoted as ALG.

Figure 4

Four different original images from [27]: (a) Image 1. (b) Image 2. (c) Image 3. (d) Image 4.

Figure 5

Eight different non-parametric motion blurs from [27]: (a) Blur 1. (b) Blur 2. (c) Blur 3. (d) Blur 4. (e) Blur 5. (f) Blur 6. (g) Blur 7 and (h) Blur 8.

The restoration results for the second set of experiments, where the original image and blur are both known, are shown in Table 2 in terms of the sum of squared errors (SSE_x̂ = ‖x − x̂‖^2) and the SSE ratio (ER_x̂ = SSE_x̂ / SSE_x̃) defined in [27]. Note that x̃ is the image estimate obtained with the non-blind method from [27]. It can be observed from Table 2 that the proposed algorithm is very robust and capable of restoring blurred images taken by a commercial camera very successfully under various non-parametric motion blurs. In addition, the proposed algorithm is very competitive with the state-of-the-art methods. As can be noted in Table 2, the SSE_x̂ metric for the proposed algorithm is, on average, 427, 89, 6, and 21 smaller than the SSE_x̂ metric for the Fergus et al., Shan et al., Cho et al., and Levin et al. methods, respectively. Note that in Fergus' method the blur estimation is performed separately from the image estimation. In order to understand the differences in the restoration results we provide some additional information. Figure 6 shows that for 66% of the tested cases the proposed algorithm yields the smallest SSE_x̂ values. In addition, Figure 7 shows that for a number of test cases the proposed algorithm achieves very small ER_x̂ values. For example, the proposed algorithm attains ER_x̂ smaller than 1.5 in 78% of the cases, while the second best, the Cho et al. method, does so in 53% of the test cases. Example restorations and comparisons with Fergus et al., Cho et al., Shan et al., and Levin et al. are shown, respectively, in Figures 8, 9, 10, and 11.
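For reference, the two comparison metrics used in Table 2 can be computed as follows (function names are ours).

```python
import numpy as np

def sse(x, x_hat):
    """Sum of squared errors of a restoration."""
    return np.sum((x - x_hat) ** 2)

def error_ratio(x, x_hat_blind, x_tilde_nonblind):
    """ER: SSE of the blind restoration over the SSE of the non-blind restoration of [27]."""
    return sse(x, x_hat_blind) / sse(x, x_tilde_nonblind)
```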

Figure 6

Comparing the proposed algorithm with the methods proposed in [12, 13, 16, 19] based on the restoration results from Table 2: percentage of cases for which the restored image yields the smallest SSE_x̂ values.

Figure 7

Comparing the cumulative histogram of the proposed algorithm with the cumulative histograms of the methods proposed in [12, 13, 16, 19] based on the restoration results from Table 2, in terms of the ER_x̂ comparison metric.

Figure 8

Example restorations from Table 2: 1st column represents eight different blurred observations, 2nd column represents their respective original undistorted versions, 3rd column represents their respective restorations obtained by the proposed algorithm, 4th column represents their respective restorations obtained by the method proposed in [12], 5th column represents their respective original blurs, 6th column represents their respective restored blurs obtained by the proposed algorithm, 7th column represents their respective restored blurs obtained by the method proposed in [12].

Figure 9

Example restorations from Table 2: 1st column represents eight different blurred observations, 2nd column represents their respective original undistorted versions, 3rd column represents their respective restorations obtained by the proposed algorithm, 4th column represents their respective restorations obtained by the method proposed in [16], 5th column represents their respective original blurs, 6th column represents their respective restored blurs obtained by the proposed algorithm, 7th column represents their respective restored blurs obtained by the method proposed in [16].

Figure 10

Example restorations from Table 2: 1st column represents eight different blurred observations, 2nd column represents their respective original undistorted versions, 3rd column represents their respective restorations obtained by the proposed algorithm, 4th column represents their respective restorations obtained by the method proposed in [13], 5th column represents their respective original blurs, 6th column represents their respective restored blurs obtained by the proposed algorithm, 7th column represents their respective restored blurs obtained by the method proposed in [13].

Figure 11

Example restorations from Table 2: 1st column represents eight different blurred observations, 2nd column represents their respective original undistorted versions, 3rd column represents their respective restorations obtained by the proposed algorithm, 4th column represents their respective restorations obtained by the method proposed in [19], 5th column represents their respective original blurs, 6th column represents their respective restored blurs obtained by the proposed algorithm, 7th column represents their respective restored blurs obtained by the method proposed in [19].

Table 2 SSE_x̂ and ER_x̂ values for the images and blurs defined in Figures 4 and 5, respectively

In the third set of experiments, we compare the performance of the proposed method with the method proposed in [14] using the same set of blurred photographs as presented in [14]. Note that in the method proposed in [14] the parameters are not estimated but manually tuned, which results in a sequence of values for each parameter. As can be seen in Figure 12, the restoration results obtained by the proposed algorithm are very competitive with those of the method proposed in [14]; the proposed algorithm produces noticeably sharper restorations with higher visual quality. Since the original scene is unknown, the comparison metrics SSE_x̂ and ER_x̂ cannot be computed for this experiment.

Figure 12

Comparing the proposed method with the method proposed in [14]: 1st column represents three different blurred observations, 2nd column represents their respective restorations obtained by the proposed algorithm, 3rd column represents their respective restorations obtained by the method proposed in [14], 4th column represents their respective blurs obtained by the proposed algorithm, 5th column represents their respective blurs obtained by the method proposed in [14].

6 Conclusions

In this article, a novel blind image deconvolution algorithm was presented. The proposed algorithm was developed within a Bayesian framework utilizing a non-convex l_p quasi norm based sparse prior on the image and a total-variation prior on the unknown blur. The proposed algorithm is completely automated once the blur support is provided. Experimental results demonstrate that, using sparse priors and the proposed parameter estimation, both the unknown image and the blur can be estimated with high accuracy. Furthermore, numerous restorations of photographs taken by commercial cameras were provided to demonstrate the robustness and effectiveness of the proposed approach. Finally, it was shown that the performance of the proposed algorithm is competitive with existing state-of-the-art blind image deconvolution algorithms.

Appendix

The pseudocode of the multi-scale implementation described in Section 4 is shown below.

References

  1. Krist J: Simulation of HST PSFs using Tiny Tim. In Astronomical Data Analysis Software and Systems IV. Edited by: Shaw RA, Payne HE, Hayes JJE. Astronomical Society of the Pacific, San Francisco, USA; 1995:349-353.

  2. Schultz TJ: Multiframe blind deconvolution of astronomical images. J. Opt. Soc. Am. A 1993, 10:1064-1073.

  3. Bretschneider T, Bones P, McNeill S, Pairman D: Image-based quality assessment of SPOT data. Proceedings of the American Society for Photogrammetry & Remote Sensing 2001.

  4. Gibson FS, Lanni F: Experimental test of an analytical model of aberration in an oil-immersion objective lens used in three-dimensional light microscopy. J. Opt. Soc. Am. A 1991, 8:1601-1613.

  5. Michailovich O, Adam D: A novel approach to the 2-D blind deconvolution problem in medical ultrasound. IEEE Trans. Med. Imaging 2005, 24:86-104.

  6. Roggemann M: Limited degree-of-freedom adaptive optics and image reconstruction. Appl. Opt. 1991, 30:4227-4233.

  7. Nisenson P, Barakat R: Partial atmospheric correction with adaptive optics. J. Opt. Soc. Am. A 1991, 4:2249-2253.

  8. Segall CA, Molina R, Katsaggelos AK: High-resolution images from low-resolution compressed video. IEEE Signal Process. Mag. 2003, 20(3):37-48.

  9. Dai S, Yang M, Wu Y, Katsaggelos AK: Tracking motion-blurred targets in video. Proc. IEEE Int. Conf. Image Processing 2006, 2389-2392.

  10. Faulkner K, Kotre CJ, Louka M: Veiling glare deconvolution of images produced by x-ray image intensifiers. Third Int. Conf. on Image Processing and Its Applications 1989, 669-673.

  11. Bishop TE, Babacan SD, Amizic B, Katsaggelos AK, Chan T, Molina R: Blind image deconvolution: problem formulation and existing approaches. CRC Press; 2007.

  12. Fergus R, Singh B, Hertzmann A, Roweis ST, Freeman WT: Removing camera shake from a single photograph. ACM Trans. Graph. 2006, 25(3):787-794.

  13. Shan Q, Jia J, Agarwala A: High-quality motion deblurring from a single image. In SIGGRAPH '08: ACM SIGGRAPH 2008 Papers. ACM, New York; 2008:1-10.

  14. Almeida M, Almeida L: Blind and semi-blind deblurring of natural images. IEEE Trans. Image Process. 2010, 19:36-52.

  15. Krishnan D, Fergus R: Fast image deconvolution using hyper-Laplacian priors. In Advances in Neural Information Processing Systems 22. Edited by: Bengio Y, Schuurmans D, Lafferty J, Williams CKI, Culotta A. 2009:1033-1041.

  16. Cho S, Lee S: Fast motion deblurring. ACM Trans. Graph. (SIGGRAPH Asia 2009) 2009, 28(5): Article 145.

  17. Cho TS, Joshi N, Zitnick CL, Kang SB, Szeliski R, Freeman WT: A content-aware image prior. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2010, 169-176.

  18. Hou T, Wang S, Qin H: Image deconvolution with multi-stage convex relaxation and its perceptual evaluation. IEEE Trans. Image Process. 2011, 20(12):3383-3392.

  19. Levin A, Weiss Y, Durand F, Freeman WT: Efficient marginal likelihood optimization in blind deconvolution. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2011, 2657-2664.

  20. Amizic B, Babacan SD, Molina R, Katsaggelos AK: Sparse Bayesian blind image deconvolution with parameter estimation. In European Signal Processing Conference (EUSIPCO). Aalborg, Denmark; 2010:626-630.

  21. Levin A, Fergus R, Durand F, Freeman WT: Image and depth from a conventional camera with a coded aperture. In SIGGRAPH '07: ACM SIGGRAPH 2007 Papers. ACM, New York; 2007.

  22. Mohammad-Djafari A: A full Bayesian approach for inverse problems. In Maximum Entropy and Bayesian Methods. Kluwer Academic Publishers; 1996:135-143.

  23. Bioucas-Dias J, Figueiredo M, Oliveira J: Adaptive total-variation image deconvolution: a majorization-minimization approach. In Proceedings of EUSIPCO 2006. Florence, Italy; 2006.

  24. Besag J: On the statistical analysis of dirty pictures. J. Royal Stat. Soc. Series B (Methodological) 1986, 48(3):259-302.

  25. Babacan SD, Molina R, Katsaggelos AK: Parameter estimation in TV image restoration using variational distribution approximation. IEEE Trans. Image Process. 2008, 17(3):326-339.

  26. Bioucas-Dias J, Figueiredo M, Oliveira J: Total-variation image deconvolution: a majorization-minimization approach. ICASSP 2006.

  27. Levin A, Weiss Y, Durand F, Freeman WT: Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR) 2009, 1964-1971.

Acknowledgements

This work was supported in part by the Department of Energy under contract DE-NA0000457 and the "Ministerio de Ciencia e Innovación" under contract TIN2010-15137.

Corresponding author

Correspondence to Bruno Amizic.

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Amizic, B., Molina, R. & Katsaggelos, A.K. Sparse Bayesian blind image deconvolution with parameter estimation. J Image Video Proc 2012, 20 (2012). https://doi.org/10.1186/1687-5281-2012-20
