Sparse Bayesian blind image deconvolution with parameter estimation
- Bruno Amizic^{1},
- Rafael Molina^{2} and
- Aggelos K. Katsaggelos^{1}
https://doi.org/10.1186/1687-5281-2012-20
© Amizic et al.; licensee Springer. 2012
Received: 30 August 2011
Accepted: 4 October 2012
Published: 21 November 2012
Abstract
In this article, we propose a novel blind image deconvolution method developed within the Bayesian framework. We concentrate on the restoration of blurred photographs taken by commercial cameras to show its effectiveness. The proposed method is based on a non-convex l_{p} quasi-norm with 0<p<1 that is used for the image, and a total variation (TV) based prior that is utilized for the blur. Bayesian inference is carried out by utilizing bounds for both the image and blur priors using a majorization-minimization principle. Maximum a posteriori estimates of the unknown image, blur, and model parameters are calculated. Experimental results (i.e., restorations of more than 30 blurred photographs) are presented to demonstrate the advantage of the proposed method compared to existing ones.
1 Introduction
Blind image deconvolution (BID) refers to the process of estimating both the original image and the blur from a degraded noisy observation, using only partial information about the imaging system. Blind image deconvolution algorithms are a valuable tool for improving image quality without requiring complicated calibration of the real-time image acquisition and processing system (e.g., in medical imaging, video-conferencing, space exploration, and x-ray imaging).
The blind image deconvolution problem is encountered in many technical areas, such as astronomical imaging, remote sensing, microscopy, medical imaging, optics, super-resolution, and motion tracking, among others (see, for example, [1–9]). Astronomical imaging is one of the primary applications of blind image deconvolution algorithms [1, 2]. Ground-based imaging systems are subject to blurring due to the rapidly changing index of refraction of the atmosphere. Extraterrestrial observations of the Earth and the planets are degraded by motion blur as a result of slow camera shutter speeds relative to the rapid spacecraft motion. Blind image deconvolution is also used to improve the quality of blurred X-ray, mammographic, and digital angiographic images corrupted by Poisson-distributed film-grain noise. In such applications the blurring is often unavoidable, because medical imaging systems limit the intensity of the incident radiation (e.g., low X-ray intensity) in order to protect the patient's health [10]. In optics, blind image deconvolution is used to remove the degradation introduced by a microscope or any other optical instrument [6, 7]; the imperfections of the Hubble Space Telescope main mirror alone have provided an inordinate number of images for the digital image processing community [1]. As a final example, in tracking applications the object being tracked may be blurred by its own speed or by the motion of the camera; the track is then lost with conventional tracking approaches, and applying blind restoration can improve tracking results [9].
The degradation model is y = Hx + n, where the N×1 vectors x, y, and n represent, respectively, the original image, the available noisy and blurred observation, and the observation noise, and H is the blurring matrix created from the blur point spread function h. The images are assumed to be of size m×n = N and are lexicographically ordered into N×1 vectors. Given y, the BID problem calls for finding estimates of x and H using prior knowledge about them.
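As a quick illustration of the degradation model y = Hx + n, the sketch below synthesizes a blurred, noisy observation from a toy image, applying H as a circular (Fourier-domain) convolution. The image size, box PSF, and noise level are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 32, 32                       # toy image size (assumption)
x = rng.random((m, n))              # stand-in for the original image x

# Normalized 5x5 uniform (box) PSF h; summing to one conserves image energy.
h = np.ones((5, 5)) / 25.0

# Apply the blurring matrix H as a circular convolution in the Fourier domain.
h_pad = np.zeros((m, n))
h_pad[:5, :5] = h
h_pad = np.roll(h_pad, (-2, -2), axis=(0, 1))   # center the PSF at the origin
Hx = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h_pad)))

n_noise = 0.01 * rng.standard_normal((m, n))    # observation noise n
y = Hx + n_noise                                # observed blurred, noisy image
```

Note that real cameras do not impose periodic boundaries; the circular model is only a convenient stand-in for H here.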
A number of methods have been proposed to address BID (a recent literature review can be found in [11]). The most recent methods are based on a Bayesian framework and have addressed the removal of camera motion from blurred photographs [12–19]. In [12] the unknown image and blur were estimated in a two-step process. In the first step the blur is estimated from the blurred photograph by regularizing the image gradients with a mixture of Gaussian distributions and the blur with a mixture of exponential distributions. In the second step the image is estimated from the blurred photograph and the estimated blur by the Richardson-Lucy (RL) algorithm. Finally, the restored image is obtained by histogram-equalizing the output of RL to the observed image. A multi-scale approach is utilized in the implementation: the blurred photograph is down-sampled to a number of low-resolution images, and the algorithm is applied iteratively to obtain the blur estimate at each resolution.
Estimating camera motion has also been investigated in [13, 14], where the unknown image and blur were estimated in a simultaneous fashion. Additionally, [14] concentrated on synthetic experiments where the performance of the algorithm was evaluated by the improvement in signal to noise ratio. Also, in [13, 14] the regularization parameters are not automatically estimated but rather heuristically chosen by the user at each iteration in order to yield an unknown image estimate with good visual quality. The major disadvantage of the methods proposed in [13, 14] compared to the method proposed in [12] is the lack of parameter estimation.
In this article, we extend our study in [20] by providing (1) a multi-scale implementation of the algorithm, which improves the quality of the restored images, and (2) a complete comparison with many existing state-of-the-art blind deconvolution methods. The proposed Bayesian algorithm for BID utilizes a variant of the non-convex l_{p} quasi-norm based prior for the unknown image and the TV prior for the unknown blur. Furthermore, we utilize the Bayesian framework to estimate all model parameters. Finally, we evaluate the performance of the proposed algorithm and provide comparisons with [12–14, 16, 19] by restoring blurred photographs taken by commercial cameras.
This article is organized as follows. In Section 2 we provide the proposed Bayesian modeling of the BID problem. The Bayesian inference is presented in Section 3. In Section 4, we describe implementation details of the proposed algorithm. Experimental results are provided in Section 5 and conclusions are drawn in Section 6.
2 Bayesian modeling
The observation noise is modeled as zero-mean white Gaussian, so that p(y∣x,h,β) ∝ β^{N/2} exp(−(β/2)∥y−Hx∥²), where β is the precision of the multivariate Gaussian distribution.
where Z_{GG}(α) is the partition function, 0<p<1, α denotes the set {α_{ d }} and d∈D={h,v,hh,vv,hv}. ${\Delta}_{i}^{h}(\mathbf{x})$ and ${\Delta}_{i}^{v}(\mathbf{x})$ correspond to, respectively, the horizontal and vertical first order differences, at pixel i, that is, ${\Delta}_{i}^{h}(\mathbf{x})={x}_{i}-{x}_{l(i)}$ and ${\Delta}_{i}^{v}(\mathbf{x})={x}_{i}-{x}_{a(i)}$, where l(i) and a(i) denote the nearest neighbors of i, to the left and above, respectively. The operators ${\Delta}_{i}^{\mathit{\text{hh}}}(\mathbf{x})$, ${\Delta}_{i}^{\mathit{\text{vv}}}(\mathbf{x})$, ${\Delta}_{i}^{\mathit{\text{hv}}}(\mathbf{x})$ correspond to, respectively, horizontal, vertical and horizontal-vertical second order differences, at pixel i.
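The first- and second-order difference operators above can be sketched as follows. The wrap-around (periodic) treatment of the image boundary is an implementation assumption made for brevity.

```python
import numpy as np

def diff_h(x):
    """First-order horizontal difference at each pixel: x_i - x_{l(i)},
    where l(i) is the nearest neighbor to the left."""
    return x - np.roll(x, 1, axis=1)

def diff_v(x):
    """First-order vertical difference at each pixel: x_i - x_{a(i)},
    where a(i) is the nearest neighbor above."""
    return x - np.roll(x, 1, axis=0)

def diff_hh(x):
    """Second-order horizontal difference (horizontal applied twice)."""
    return diff_h(diff_h(x))

def diff_vv(x):
    """Second-order vertical difference (vertical applied twice)."""
    return diff_v(diff_v(x))

def diff_hv(x):
    """Mixed horizontal-vertical second-order difference."""
    return diff_v(diff_h(x))
```

On a horizontal ramp image the horizontal first difference is constant (away from the wrap-around column) and the second difference vanishes, as expected.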
In this study, similarly to [15, 21], we utilize a non-convex l_{p} quasi-norm with 0<p<1, since the derivatives of natural images are expected to be sparse. The distributions of image derivatives often have heavy tails, which are better modeled by the non-convex l_{p} quasi-norm prior with 0<p<1 than by the convex priors obtained with p = 1, 2.
where o(d)∈{1,2} denotes the order of the difference operator ${\Delta}_{i}^{d}(\mathbf{x})$.
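To see why the l_{p} quasi-norm with 0<p<1 favors sparse derivatives, one can compare its energy on a sparse and a dense gradient profile of identical l_2 energy. This is a toy illustration, not part of the paper's derivation.

```python
import numpy as np

def lp_energy(dx, p=0.8):
    """Sum_i |dx_i|^p, the l_p quasi-norm energy of a vector of differences."""
    return float(np.sum(np.abs(dx) ** p))

# Two gradient profiles with identical l_2 energy (sum of squares = 25):
sparse = np.array([5.0, 0.0, 0.0, 0.0, 0.0])   # one large jump (an edge)
dense = np.full(5, np.sqrt(5.0))               # many small variations

# With p < 1 the sparse profile is cheaper, so the prior preserves edges while
# pushing smooth regions toward exact zeros; with p = 2 both profiles cost 25.
```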
Note that with this choice of the hyperpriors, the observed image y is made solely responsible for the estimation of the image, blur and hyperparameters.
3 Bayesian inference
As can be seen from (10), obtaining the point estimate that maximizes the posterior distribution p(α, β, γ, x, h∣y) is not straightforward, since it requires the minimization of a non-convex functional. Maximizing the posterior distribution p(α, β, γ, x, h∣y) by iteratively optimizing one variable while fixing the others (the so-called iterated conditional modes (ICM) method [24]) is equivalent to variational Bayesian based maximization (see [25] for an example derivation) in the special case when all the posterior distributions are assumed to be degenerate.
The majorization-minimization approach has been utilized in several approaches for image restoration [25, 26].
where Z is a matrix with elements z_{d,i}, with d∈{h,v,hh,vv,hv} and i=1,…,N.
As shown in (20), we are effectively replacing the original non-convex minimization problem (10) by a series of convex ones by utilizing the majorization-minimization criteria and introducing the additional variational vectors z_{ d } and u. By iteratively solving this convex optimization problem in an alternating fashion with respect to all unknowns, we obtain a sequence of point estimates and derive the proposed algorithm as shown next.
3.1 Algorithm
Given α^{1},β^{1},γ^{1}, h^{1}, ${u}_{i}^{1}={[{\Delta}_{i}^{h}({\mathbf{h}}^{1})]}^{2}+{[{\Delta}_{i}^{v}({\mathbf{h}}^{1})]}^{2}$, and ${z}_{d,i}^{1}$.
- 1. Calculate
$$\mathbf{x}^{k}=\left[\beta^{k}(\mathbf{H}^{k})^{t}\mathbf{H}^{k}+\alpha^{k}p\sum_{d}2^{1-o(d)}(\Delta^{d})^{t}\mathbf{W}_{d}^{k}\Delta^{d}\right]^{-1}\beta^{k}(\mathbf{H}^{k})^{t}\mathbf{y},\qquad(21)$$
- 2. Calculate
$$\mathbf{h}^{k+1}=\left[\beta^{k}(\mathbf{X}^{k})^{t}\mathbf{X}^{k}+\gamma^{k}\sum_{d\in\{h,v\}}(\Delta^{d})^{t}\mathbf{U}^{k}\Delta^{d}\right]^{-1}\beta^{k}(\mathbf{X}^{k})^{t}\mathbf{y},\qquad(22)$$
- 3. For each d∈{h,v,hh,vv,hv} calculate
$$z_{d,i}^{k+1}=\left[\Delta_{i}^{d}(\mathbf{x}^{k})\right]^{2},\qquad(23)$$
- 4. Calculate
$$u_{i}^{k+1}=\left[\Delta_{i}^{h}(\mathbf{h}^{k+1})\right]^{2}+\left[\Delta_{i}^{v}(\mathbf{h}^{k+1})\right]^{2},\qquad(24)$$
- 5. Calculate
$$\alpha^{k+1}=\frac{\lambda_{1}N/p}{\sum_{d\in D}2^{1-o(d)}\sum_{i}|\Delta_{i}^{d}(\mathbf{x}^{k})|^{p}},\qquad(25)$$
$$\beta^{k+1}=\frac{N}{\|\mathbf{y}-\mathbf{H}^{k+1}\mathbf{x}^{k}\|^{2}},\qquad(26)$$
$$\gamma^{k+1}=\frac{\lambda_{2}N}{\text{TV}(\mathbf{h}^{k+1})},\qquad(27)$$
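The closed-form hyperparameter updates (25)-(27) can be sketched as below. The function names, the toy defaults (p = 0.8, λ1 = λ2 = 1), and the caller-supplied list of difference images are illustrative assumptions; the full algorithm uses the five operators in D and the linear-system updates (21)-(22) as well.

```python
import numpy as np

def update_alpha(x_diffs, orders, p=0.8, lam1=1.0):
    """Eq. (25): alpha = (lam1 * N / p) / sum_d 2^{1-o(d)} sum_i |D_i^d(x)|^p.
    x_diffs is a list of difference images, orders their operator orders o(d)."""
    N = x_diffs[0].size
    denom = sum(2.0 ** (1 - o) * np.sum(np.abs(dx) ** p)
                for dx, o in zip(x_diffs, orders))
    return lam1 * N / p / denom

def update_beta(y, Hx):
    """Eq. (26): beta = N / ||y - H x||^2, the inverse residual variance."""
    return y.size / np.sum((y - Hx) ** 2)

def update_gamma(tv_h, N, lam2=1.0):
    """Eq. (27): gamma = lam2 * N / TV(h)."""
    return lam2 * N / tv_h
```

Each update is the mode of the corresponding conditional posterior, so no line search or user tuning is involved.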
In line with the study presented in [21], the parameter p is set to 4/5 (see [21] for a detailed discussion). Additionally, the parameters λ_{1} and λ_{2} are needed to approximate the partition functions of the prior distributions p(x|α) and p(h|γ); these approximations are necessary because the corresponding partition functions are analytically intractable. We follow the approaches proposed in [22, 23], as described in the previous section, and determine the values of λ_{1} and λ_{2} experimentally. The values of p, λ_{1}, and λ_{2} are then kept fixed throughout all the experiments that follow, and the robustness of the proposed method is tested and evaluated under various blurring and noise conditions.
Note that if the blur h and the hyperparameters α, β, and γ are assumed known, the proposed algorithm coincides with the iteratively re-weighted least squares (IRLS) algorithm presented in [21] (i.e., in this case both algorithms calculate the image estimate as shown in (21)). Note also that the l_{p} quasi-norm based prior is utilized in [15] as well, which simplifies the prior used in [21] by omitting the second-order derivatives.
4 Multi-scale implementation
The restoration results presented in [12], and more recently in [27], showed the effectiveness of the multi-scale approach in implementing blind image deconvolution algorithms. Furthermore, it is shown in [27] that the multi-scale approach prevents the algorithm from converging to the unit impulse. Alternatively, the authors in [13, 14] introduced a heuristic re-weighting of the regularization parameters at each iteration to prevent their algorithms from converging to unrealistic blur estimates. In our study, no heuristic adjustment of the parameters is performed; instead, they are estimated automatically within the Bayesian framework.
To avoid unrealistic blur estimates we adopt a multi-scale scheme similar in spirit to the one proposed in [12]. Additionally, the proposed multi-scale approach allows the user to initialize the algorithm automatically, without visually inspecting the observed blurred image to determine the initial blur estimate. By analyzing the observed blurred image it is possible to obtain more informative initial blur estimates; in this study, however, our focus is to develop a completely automated algorithm once the blur support is provided.
The basic idea behind the multi-scale approach is to down-sample the observed blurred image to a number of low-resolution images. At the lowest resolution the initial blur estimate h^{1} is set to the uniform blur, and the lowest-resolution down-sampled observation is used as the initial image estimate x^{1}. After convergence is achieved at each scale, we up-sample the image and blur estimates to the next higher resolution and re-run the proposed algorithm. This process is repeated until the algorithm converges and the image and blur estimates are obtained at their native resolution. The detailed pseudocode of our multi-scale implementation is given in the Appendix.
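The coarse-to-fine loop just described can be sketched as follows. The block-average down-sampling, nearest-neighbor up-sampling, and the `deconvolve` callback standing in for one run of the single-scale algorithm are all illustrative assumptions; the detailed pseudocode is in the Appendix.

```python
import numpy as np

def downsample(img, factor=2):
    """Block-average down-sampling (a simple stand-in resampler)."""
    m, n = img.shape
    m2, n2 = m // factor, n // factor
    return img[:m2 * factor, :n2 * factor].reshape(m2, factor, n2, factor).mean(axis=(1, 3))

def upsample(img, factor=2):
    """Nearest-neighbor up-sampling."""
    return np.kron(img, np.ones((factor, factor)))

def multiscale_bid(y, n_scales=3, deconvolve=None):
    """Coarse-to-fine driver: run the (hypothetical) single-scale solver
    deconvolve(y_s, x0, h0) at each resolution, propagating estimates upward."""
    pyramid = [y]
    for _ in range(n_scales - 1):
        pyramid.append(downsample(pyramid[-1]))
    # Coarsest-scale initialization: uniform blur h^1, observed image as x^1.
    h = np.ones((3, 3)) / 9.0
    x = pyramid[-1]
    for y_s in reversed(pyramid):
        if x.shape != y_s.shape:
            x = upsample(x)[:y_s.shape[0], :y_s.shape[1]]
        if deconvolve is not None:
            x, h = deconvolve(y_s, x, h)
    return x, h
```

In a full implementation the blur estimate would also be up-sampled and renormalized between scales; that step is elided here for brevity.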
5 Experimental results
In this section, we present the experimental results obtained by the use of the proposed algorithm. As the performance metric, for the experiments in which the original image is known, we utilize the improvement in signal to noise ratio of the restored image (denoted as ${\text{ISNR}}_{\widehat{\mathbf{x}}}$), which is defined as $10{\text{log}}_{10}\left(\parallel \mathbf{x}-\mathbf{y}{\parallel}^{2}/\parallel \mathbf{x}-\widehat{\mathbf{x}}{\parallel}^{2}\right)$, where x, y and $\widehat{\mathbf{x}}$ are the original, observed and estimated images, respectively. Analogously, when the blur is known we utilize the improvement in signal to noise ratio of the restored blur (denoted as ${\text{ISNR}}_{\widehat{\mathbf{h}}}$), which is defined as $10{\text{log}}_{10}\left(\parallel \mathbf{h}-{\mathbf{h}}_{\delta}{\parallel}^{2}/\parallel \mathbf{h}-\widehat{\mathbf{h}}{\parallel}^{2}\right)$, where h, h_{ δ }and $\widehat{\mathbf{h}}$ are the original blur, the unit impulse, and the estimated blur, respectively.
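The ISNR metric defined above is straightforward to compute; a minimal sketch:

```python
import numpy as np

def isnr(x, y, x_hat):
    """Improvement in SNR (dB): 10 log10(||x - y||^2 / ||x - x_hat||^2),
    with x the original, y the observation, and x_hat the restoration."""
    return 10.0 * np.log10(np.sum((x - y) ** 2) / np.sum((x - x_hat) ** 2))
```

A positive value means the restoration is closer to the original than the observation is; the blur-domain variant is identical with (h, h_δ, ĥ) in place of (x, y, x̂).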
In addition, once a blur support larger than the true one is specified by the user, all unknown parameters and the estimates of the unknown image and blur are obtained automatically as described in Sections 2 and 3. Specifying the support of the unknown blur is common among state-of-the-art approaches (see [12–14, 16, 19]).
Similarly to these approaches, the proposed algorithm is very robust even when the support of the blur is largely overestimated by the user, as can be seen in all the experiments that follow. Also, for the experiments with blurred color images, only the luminance component of the observed image is restored; the observed chroma components are combined with the estimated luminance to obtain the restored color image. Finally, the proposed algorithm is terminated when the criterion ∥x^{k}−x^{k−1}∥/∥x^{k−1}∥<10^{−3} is satisfied or the number of iterations reaches 100. After each iteration, we enforce the following constraints on the blur estimate: positivity (blur elements less than zero are set to zero), the support constraint (blur elements outside the blur support estimate are set to zero), and energy conservation (the blur elements sum to one).
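The stopping rule and the per-iteration blur constraints can be sketched as follows; the function names are ours, not the paper's.

```python
import numpy as np

def project_blur(h, support_mask):
    """Enforce the three constraints applied after each iteration:
    positivity, the support constraint, and energy conservation."""
    h = np.where(h < 0, 0.0, h)        # positivity: clip negative elements
    h = h * support_mask               # zero out elements outside the support
    s = h.sum()
    return h / s if s > 0 else h       # renormalize so the elements sum to one

def converged(x_new, x_old, tol=1e-3):
    """Relative-change stopping rule ||x^k - x^{k-1}|| / ||x^{k-1}|| < tol."""
    return np.linalg.norm(x_new - x_old) / np.linalg.norm(x_old) < tol
```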
$\text{ISNR}_{\widehat{\mathbf{x}}}$ and $\text{ISNR}_{\widehat{\mathbf{h}}}$ values (in dB) for the cameraman, satellite, shepp-logan, and airplane images degraded by five different motion blurs (BSNR = 40 dB)

| Image | Blur 1 ISNR$_{\widehat{\mathbf{x}}}$ | Blur 1 ISNR$_{\widehat{\mathbf{h}}}$ | Blur 2 ISNR$_{\widehat{\mathbf{x}}}$ | Blur 2 ISNR$_{\widehat{\mathbf{h}}}$ | Blur 3 ISNR$_{\widehat{\mathbf{x}}}$ | Blur 3 ISNR$_{\widehat{\mathbf{h}}}$ | Blur 4 ISNR$_{\widehat{\mathbf{x}}}$ | Blur 4 ISNR$_{\widehat{\mathbf{h}}}$ | Blur 5 ISNR$_{\widehat{\mathbf{x}}}$ | Blur 5 ISNR$_{\widehat{\mathbf{h}}}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| cameraman | 11.41 | 20.80 | 7.07 | 16.74 | 6.64 | 18.41 | 5.96 | 21.94 | 6.92 | 21.22 |
| satellite | 18.50 | 39.44 | 17.85 | 35.49 | 17.98 | 20.56 | 8.76 | 25.76 | 9.02 | 26.17 |
| shepp-logan | 32.49 | 47.34 | 24.27 | 45.23 | 31.16 | 20.51 | 22.99 | 49.00 | 22.36 | 49.76 |
| airplane-color | 8.10 | 17.55 | 9.34 | 15.67 | 7.99 | 15.78 | 8.50 | 20.84 | 4.83 | 18.32 |
| Blur | Method | Image 1 SSE$_{\widehat{\mathbf{x}}}$ | Image 1 ER$_{\widehat{\mathbf{x}}}$ | Image 2 SSE$_{\widehat{\mathbf{x}}}$ | Image 2 ER$_{\widehat{\mathbf{x}}}$ | Image 3 SSE$_{\widehat{\mathbf{x}}}$ | Image 3 ER$_{\widehat{\mathbf{x}}}$ | Image 4 SSE$_{\widehat{\mathbf{x}}}$ | Image 4 ER$_{\widehat{\mathbf{x}}}$ |
|---|---|---|---|---|---|---|---|---|---|
| Blur 1 | ALG | 29.91 | 0.91 | 43.93 | 0.91 | 24.75 | 0.78 | 43.73 | 1.46 |
| | Fergus et al. | 39.73 | 1.21 | 59.71 | 1.24 | 39.48 | 1.25 | 44.33 | 1.48 |
| | Cho et al. | 33.05 | 1.00 | 64.88 | 1.35 | 26.74 | 0.85 | 31.59 | 1.05 |
| | Shan et al. | 49.79 | 1.51 | 100.70 | 2.09 | 45.83 | 1.45 | 53.76 | 1.79 |
| | Levin et al. | 44.06 | 1.34 | 63.98 | 1.33 | 38.82 | 1.23 | 78.25 | 2.61 |
| Blur 2 | ALG | 33.47 | 0.89 | 50.00 | 0.97 | 20.59 | 0.57 | 42.18 | 0.96 |
| | Fergus et al. | 40.70 | 1.08 | 66.12 | 1.29 | 41.55 | 1.15 | 93.14 | 2.11 |
| | Cho et al. | 34.13 | 0.91 | 53.42 | 1.04 | 30.18 | 0.83 | 102.24 | 2.32 |
| | Shan et al. | 38.87 | 1.03 | 313.48 | 6.11 | 29.74 | 0.82 | 146.73 | 3.33 |
| | Levin et al. | 48.50 | 1.29 | 74.05 | 1.44 | 46.73 | 1.29 | 128.82 | 2.92 |
| Blur 3 | ALG | 27.45 | 1.06 | 19.38 | 0.45 | 16.96 | 0.92 | 17.78 | 1.16 |
| | Fergus et al. | 30.05 | 1.16 | 55.75 | 1.31 | 21.36 | 1.16 | 19.63 | 1.29 |
| | Cho et al. | 31.41 | 1.21 | 29.17 | 0.68 | 17.56 | 0.95 | 19.55 | 1.28 |
| | Shan et al. | 28.12 | 1.08 | 53.48 | 1.25 | 19.20 | 1.04 | 16.33 | 1.07 |
| | Levin et al. | 34.93 | 1.35 | 64.27 | 1.50 | 18.93 | 1.03 | 49.97 | 3.27 |
| Blur 4 | ALG | 51.59 | 1.08 | 93.70 | 1.30 | 28.69 | 0.74 | 44.15 | 1.11 |
| | Fergus et al. | 125.80 | 2.64 | 112.92 | 1.56 | 81.28 | 2.09 | 11658.02 | 294.05 |
| | Cho et al. | 63.73 | 1.34 | 80.16 | 1.11 | 41.34 | 1.06 | 84.32 | 2.13 |
| | Shan et al. | 100.43 | 2.11 | 178.28 | 2.47 | 134.77 | 3.46 | 429.12 | 10.82 |
| | Levin et al. | 95.81 | 2.01 | 105.20 | 1.45 | 68.15 | 1.75 | 97.73 | 2.46 |
| Blur 5 | ALG | 31.39 | 1.50 | 36.54 | 1.32 | 21.28 | 1.45 | 20.27 | 1.31 |
| | Fergus et al. | 27.32 | 1.31 | 39.50 | 1.42 | 21.58 | 1.47 | 20.61 | 1.34 |
| | Cho et al. | 38.59 | 1.85 | 33.25 | 1.20 | 23.30 | 1.59 | 16.00 | 1.04 |
| | Shan et al. | 30.76 | 1.47 | 51.94 | 1.87 | 17.71 | 1.21 | 20.85 | 1.35 |
| | Levin et al. | 26.50 | 1.27 | 35.98 | 1.29 | 17.43 | 1.19 | 34.01 | 2.20 |
| Blur 6 | ALG | 20.72 | 1.32 | 23.71 | 1.17 | 18.61 | 1.88 | 23.66 | 1.32 |
| | Fergus et al. | 44.02 | 2.80 | 84.12 | 4.16 | 33.29 | 3.36 | 46.83 | 2.62 |
| | Cho et al. | 42.68 | 2.72 | 36.37 | 1.80 | 19.24 | 1.94 | 37.60 | 2.10 |
| | Shan et al. | 71.33 | 4.54 | 199.59 | 9.87 | 28.90 | 2.91 | 58.19 | 3.25 |
| | Levin et al. | 28.47 | 1.81 | 36.73 | 1.82 | 20.98 | 2.12 | 62.42 | 3.49 |
| Blur 7 | ALG | 38.51 | 1.90 | 61.57 | 1.56 | 20.74 | 1.59 | 64.40 | 4.36 |
| | Fergus et al. | 206.70 | 10.22 | 152.18 | 3.86 | 137.37 | 10.53 | 501.31 | 33.90 |
| | Cho et al. | 43.46 | 2.15 | 48.99 | 1.24 | 26.81 | 2.06 | 31.38 | 2.12 |
| | Shan et al. | 252.56 | 12.49 | 250.20 | 6.34 | 230.76 | 17.69 | 300.06 | 20.29 |
| | Levin et al. | 45.91 | 2.27 | 64.07 | 1.62 | 27.05 | 2.07 | 97.97 | 6.63 |
| Blur 8 | ALG | 30.07 | 1.16 | 44.67 | 1.10 | 33.23 | 1.43 | 77.69 | 3.37 |
| | Fergus et al. | 49.42 | 1.91 | 89.65 | 2.20 | 51.95 | 2.24 | 781.10 | 33.88 |
| | Cho et al. | 45.48 | 1.76 | 73.35 | 1.80 | 47.10 | 2.03 | 41.63 | 1.81 |
| | Shan et al. | 158.72 | 6.14 | 106.66 | 2.62 | 202.18 | 8.73 | 287.21 | 12.46 |
| | Levin et al. | 48.19 | 1.86 | 71.32 | 1.75 | 31.20 | 1.35 | 112.69 | 4.89 |
6 Conclusions
In this article, a novel blind image deconvolution algorithm was presented. The proposed algorithm was developed within a Bayesian framework, utilizing a non-convex l_{p} quasi-norm based sparse prior for the image and a total-variation prior for the unknown blur. The proposed algorithm is completely automated once the blur support is provided. Experimental results demonstrate that, using sparse priors and the proposed parameter estimation, both the unknown image and blur can be estimated with very high accuracy. Furthermore, numerous restorations of photographs taken by commercial cameras demonstrate the robustness and effectiveness of the proposed approach. Finally, the performance of the proposed algorithm was shown to be competitive with existing state-of-the-art blind image deconvolution algorithms.
Appendix
Acknowledgements
This work was supported in part by the Department of Energy under contract DE-NA0000457 and the “Ministerio de Ciencia e Innovación” under contract TIN2010-15137.
References
- Krist J: Simulation of HST PSFs using Tiny Tim. In Astronomical Data Analysis Software and Systems IV. Edited by Shaw RA, Payne HE, Hayes JJE. Astronomical Society of the Pacific, San Francisco, USA; 1995:349-353.
- Schultz TJ: Multiframe blind deconvolution of astronomical images. J. Opt. Soc. Am. A 1993, 10:1064-1073.
- Bretschneider T, Bones P, McNeill S, Pairman D: Image-based quality assessment of SPOT data. Proceedings of the American Society for Photogrammetry & Remote Sensing, 2001.
- Gibson FS, Lanni F: Experimental test of an analytical model of aberration in an oil-immersion objective lens used in three-dimensional light microscopy. J. Opt. Soc. Am. A 1991, 8:1601-1613.
- Michailovich O, Adam D: A novel approach to the 2-D blind deconvolution problem in medical ultrasound. IEEE Trans. Med. Imaging 2005, 24:86-104.
- Roggemann M: Limited degree-of-freedom adaptive optics and image reconstruction. Appl. Opt. 1991, 30:4227-4233.
- Nisenson P, Barakat R: Partial atmospheric correction with adaptive optics. J. Opt. Soc. Am. A 1991, 4:2249-2253.
- Segall CA, Molina R, Katsaggelos AK: High-resolution images from low-resolution compressed video. IEEE Signal Process. Mag. 2003, 20(3):37-48.
- Dai S, Yang M, Wu Y, Katsaggelos AK: Tracking motion-blurred targets in video. Proc. IEEE Int. Conf. Image Processing 2006, 2389-2392.
- Faulkner K, Kotre CJ, Louka M: Veiling glare deconvolution of images produced by x-ray image intensifiers. Third Int. Conf. on Image Processing and Its Applications 1989, 669-673.
- Bishop TE, Babacan SD, Amizic B, Katsaggelos AK, Chan T, Molina R: Blind image deconvolution: problem formulation and existing approaches. CRC Press; 2007.
- Fergus R, Singh B, Hertzmann A, Roweis ST, Freeman WT: Removing camera shake from a single photograph. ACM Trans. Graph. 2006, 25(3):787-794.
- Shan Q, Jia J, Agarwala A: High-quality motion deblurring from a single image. In SIGGRAPH '08: ACM SIGGRAPH 2008 Papers. ACM, New York; 2008:1-10.
- Almeida M, Almeida L: Blind and semi-blind deblurring of natural images. IEEE Trans. Image Process. 2010, 19:36-52.
- Krishnan D, Fergus R: Fast image deconvolution using hyper-Laplacian priors. In Advances in Neural Information Processing Systems 22. Edited by Bengio Y, Schuurmans D, Lafferty J, Williams CKI, Culotta A. 2009:1033-1041.
- Cho S, Lee S: Fast motion deblurring. ACM Trans. Graph. (SIGGRAPH Asia 2009) 2009, 28(5), Article 145.
- Cho TS, Joshi N, Zitnick CL, Kang SB, Szeliski R, Freeman WT: A content-aware image prior. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2010, 169-176.
- Hou T, Wang S, Qin H: Image deconvolution with multi-stage convex relaxation and its perceptual evaluation. IEEE Trans. Image Process. 2011, 20(12):3383-3392.
- Levin A, Weiss Y, Durand F, Freeman WT: Efficient marginal likelihood optimization in blind deconvolution. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2011, 2657-2664.
- Amizic B, Babacan SD, Molina R, Katsaggelos AK: Sparse Bayesian blind image deconvolution with parameter estimation. In European Signal Processing Conference (EUSIPCO). Aalborg, Denmark; 2010:626-630.
- Levin A, Fergus R, Durand F, Freeman WT: Image and depth from a conventional camera with a coded aperture. In SIGGRAPH '07: ACM SIGGRAPH 2007 Papers. ACM, New York; 2007, Article 70.
- Mohammad-Djafari A: A full Bayesian approach for inverse problems. In Maximum Entropy and Bayesian Methods. Kluwer Academic Publishers; 1996:135-143.
- Bioucas-Dias J, Figueiredo M, Oliveira J: Adaptive total-variation image deconvolution: a majorization-minimization approach. Proceedings of EUSIPCO 2006, Florence, Italy; 2006.
- Besag J: On the statistical analysis of dirty pictures. J. Royal Stat. Soc. Series B (Methodological) 1986, 48(3):259-302.
- Babacan S, Molina R, Katsaggelos A: Parameter estimation in TV image restoration using variational distribution approximation. IEEE Trans. Image Process. 2008, 17(3):326-339.
- Bioucas-Dias J, Figueiredo M, Oliveira J: Total-variation image deconvolution: a majorization-minimization approach. Proc. IEEE ICASSP 2006.
- Levin A, Weiss Y, Durand F, Freeman WT: Understanding and evaluating blind deconvolution algorithms. Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR) 2009, 1964-1971.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.