
Reconstruction for block-based compressive sensing of image with reweighted double sparse constraint

Abstract

Block compressive sensing reduces computational complexity by dividing the image into multiple patches that are processed separately, but at the cost of degraded reconstruction performance. Generally, a reconstruction algorithm improves the quality of the reconstructed image by adding various constraints and regularization terms, i.e., prior information. In this paper, a reweighted double sparse constraint reconstruction model that combines residual sparsity with an ℓ1 regularization term is proposed. The residual sparsity exploits the nonlocal similarity of image patches, and the ℓ1 regularization term exploits their local sparsity. The resulting model is solved within the framework of split Bregman iteration (SBI). Extensive experiments show that the proposed algorithm reconstructs the original image efficiently and is comparable to current representative compressive sensing reconstruction algorithms.

1 Introduction

Compressive sensing (CS) theory, proposed by Candès et al. [1], breaks the limitation of the Nyquist sampling theorem: it can recover the original signal from samples taken at a rate lower than twice the bandwidth, provided the signal is sparse in some domain. Since its birth, CS theory has been widely used in various fields, such as nuclear magnetic resonance, image processing, analog-to-information conversion, and compressive radar [2].

Suppose that x ∈ ℝ^N is a finite-length signal that is sparse or compressible. According to CS theory [1], rather than observing the original signal x directly, a much smaller number of linear measurements is acquired by the following random linear projection:

$$ y= Hx+n $$
(1)

where y ∈ ℝ^M is the measurement of the signal, H ∈ ℝ^(M × N) represents the measurement matrix, and n denotes additive noise. Since M is much smaller than N, solving Eq. (1) is an ill-posed inverse problem.

In order to solve the problem effectively, prior knowledge is needed, which leads to the following regularization-based optimization problem

$$ \arg {\min}_x\frac{1}{2}{\left\Vert Hx-y\right\Vert}_2^2+\lambda \Psi (x) $$
(2)

where \( {\left\Vert Hx-y\right\Vert}_2^2 \) is the data fidelity term, Ψ(x) is a regularization term for which there are various choices such as ℓ0, ℓ1, and total variation (TV), and λ is a regularization parameter.

Since natural images are almost always compressible, applying compressive sensing to them is natural. However, if compressive sensing is applied directly to a large image, the measurement matrix becomes very “large,” which requires huge amounts of memory and results in high computational complexity. Therefore, the block-based compressive sensing (BCS) method [3] was proposed. In BCS, the image is divided into multiple patches and each patch is measured separately, so the computational complexity is greatly reduced. Nevertheless, the quality of the reconstructed image is degraded compared with reconstructing the entire image at once.
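To make the blocking step concrete, the following sketch measures every non-overlapped B × B block with one shared Gaussian matrix; the function name, the Gaussian choice of Φ, and the assumption that the image dimensions are divisible by B are ours, not part of [3]:

```python
import numpy as np

def bcs_measure(image, B=8, ratio=0.3, seed=0):
    """Block-based CS: measure each non-overlapped BxB block of `image`
    with one shared M x B^2 Gaussian matrix (M = ratio * B^2)."""
    rng = np.random.default_rng(seed)
    M = max(1, int(round(ratio * B * B)))         # measurements per block, M << B^2
    Phi = rng.standard_normal((M, B * B)) / np.sqrt(M)
    rows, cols = image.shape                      # assumed divisible by B
    ys = []
    for r in range(0, rows, B):
        for c in range(0, cols, B):
            x_k = image[r:r + B, c:c + B].reshape(-1)  # vectorized patch
            ys.append(Phi @ x_k)                       # y_k = Phi x_k
    return Phi, np.stack(ys)                           # Phi and one row per block
```

Because every block shares the same small Φ, the storage cost is M × B² instead of the M × N required to measure the whole image at once.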

Because prior knowledge has a crucial influence on the performance of an image reconstruction algorithm, designing an effective regularization term helps to make full use of image priors and further improve the quality of the reconstructed image. Sparsity and nonlocal similarity, two of the most important properties of natural images, are exploited for this purpose. Sparsity means that the original image can be represented with few nonzero (or near-zero) coefficients; more specifically, the image admits a sparse organization in some domain [4, 5]. Various predetermined transform bases, including the discrete cosine transform (DCT) and the discrete wavelet transform (DWT), have been used to exploit sparsity and derive reconstruction algorithms, such as smoothed projected Landweber BCS based on DCT (BCS-SPL-DCT) [6] and on DWT (BCS-SPL-DWT) [6]. Furthermore, to enrich the texture and structure of recovered images [7, 8], the multi-hypothesis (MH) prediction method [9], which explores nonlocal similarities, was proposed. Sharing a similar idea, the methods of [10,11,12] exploit nonlocal similarities to design local sparsifying transforms and achieve better recovery than earlier BCS algorithms that ignore nonlocal similarities. However, the recovered images still contain some visual artifacts.

For better CS recovery, some researchers weight the signal coefficients. Candès et al. [13] proposed a weighting scheme based on the magnitudes of the coefficients to get closer to the ℓ0 norm while still using the ℓ1 norm in the optimization problem. In a similar manner, Asif et al. [14] adaptively assigned weights using signal homotopy.
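The core of the reweighting idea in [13] fits in a few lines: after each solve, a coefficient's weight is set inversely proportional to its current magnitude, so large coefficients are penalized less and the weighted ℓ1 norm behaves more like ℓ0. A minimal sketch (the stabilizing constant eps is a free parameter):

```python
import numpy as np

def reweight(coeffs, eps=1.0):
    """Candes-style reweighting: w_i = 1 / (|c_i| + eps), so that
    currently large coefficients receive small ell_1 penalties."""
    return 1.0 / (np.abs(coeffs) + eps)
```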

To better reconstruct the original image in both smooth and edge regions, this paper proposes a method that combines sparsity and nonlocal similarity by means of reweighting. Firstly, the residual, which is the difference between a target image patch and a linear combination of its similar patches, is computed; it is typically sparser than the patch itself, so residual sparsity is used to exploit nonlocal similarity. Secondly, reweighting the sparse constraints accounts for the varying sparsity of different image patches and thereby enhances sparsity. Thirdly, we use the framework of SBI to split the proposed model into several sub-problems that can be solved effectively.

The main contributions of this paper are as follows.

  1.

    We propose a reconstruction model that combines residual sparsity with an ℓ1 regularization term by way of reweighting. It utilizes local sparsity and nonlocal self-similarity simultaneously, which further improves the performance of the reconstruction algorithm.

  2.

    To solve the reweighted double sparse constraint model, we design an effective scheme based on the split Bregman iteration (SBI) algorithm. Extensive experiments are conducted and the proposed method is compared with other typical algorithms using PSNR and SSIM.

The rest of this paper is organized as follows. Section 2 introduces the reconstruction model in four parts: the residual model, the reweighted sparse representation, weight estimation, and the solution of the proposed method. Section 3 presents extensive experiments and compares the results with other representative methods. Section 4 concludes the paper.

2 Proposed method

In this section, we give a detailed description of the proposed reweighted double sparse model. Firstly, to control the computational complexity and memory requirements, we use BCS [3] to divide the original image x into non-overlapped patches p_k, 1 ≤ k ≤ P, of size B². The MH prediction method [9] is used to obtain an initial recovery. Then we divide the initial recovery x^int into D overlapped patches x_k, 1 ≤ k ≤ D, of size S². This partitioning process is formulated as

$$ {x}_k={E}_k{x}^{\mathrm{int}} $$
(3)

where E_k is a matrix operator that extracts the patch x_k from x^int in an overlapped way. We consider the residual between each patch and the linear combination of its similar patches, since the residual exhibits stronger sparsity [15]. Because different residuals have different degrees of sparsity, we apply reweighting. Simultaneously, the sparsity of the image patch itself is taken into account in our model, and we enhance it by reweighted ℓ1 minimization [13]. Thus, the reweighted double sparse constraint model is expressed as

$$ x=\underset{x}{\arg \min}\frac{1}{2}{\left\Vert y- Hx\right\Vert}_2^2+{\lambda}_1{\sum}_{k=1}^D{\left\Vert {W}_1\left({x}_k-u\right)\right\Vert}_1+{\lambda}_2{\sum}_{k=1}^D{\left\Vert {W}_2{x}_k\right\Vert}_1 $$
(4)

where λ1 and λ2 denote regularization constants, y represents the corresponding measurement, W1 and W2 are reweighting matrices that are updated iteratively, u is the linear combination of the similar patches, and H is a block-diagonal matrix whose diagonal blocks are the measurement matrix Ψ of size M × S². The expression of H is written as

$$ H=\left[\begin{array}{cccc}\Psi & 0& \cdots & 0\\ {}0& \Psi & \cdots & 0\\ {}\vdots & \vdots & \ddots & \vdots \\ {}0& 0& \cdots & \Psi \end{array}\right] $$
(5)

The residual term ‖W1(x − u)‖1 exploits the nonlocal similarity, and the ℓ1 regularization term ‖W2x‖1 exploits the local sparsity of the image patch. W1 discriminatively weights the different residual coefficients, while W2 reweights the ℓ1 minimization; combining W1 and W2 further enhances sparsity. In the proposed model, nonlocal similarity and local sparsity are combined effectively to improve the reconstruction quality.
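To fix ideas, the sketch below evaluates the cost in Eq. (4) for a candidate image; it is a direct transcription under the assumption that the weight matrices are stored as per-patch vectors of diagonal entries (all names are ours):

```python
import numpy as np

def objective(y, H, x, patches, u, W1, W2, lam1, lam2):
    """Eq. (4): 0.5*||y - Hx||_2^2
                + lam1 * sum_k ||W1_k (x_k - u_k)||_1
                + lam2 * sum_k ||W2_k x_k||_1
    patches, u : lists of length-S^2 patch vectors x_k and u_k
    W1, W2     : lists of the diagonal entries of W_{1,k}, W_{2,k}"""
    fidelity = 0.5 * np.sum((y - H @ x) ** 2)
    nonlocal_term = sum(np.sum(np.abs(w * (p - uk)))
                        for w, p, uk in zip(W1, patches, u))
    local_term = sum(np.sum(np.abs(w * p)) for w, p in zip(W2, patches))
    return fidelity + lam1 * nonlocal_term + lam2 * local_term
```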

Next, we will introduce the model in detail, including the residual model, reweighted sparse representation, and weight estimation.

2.1 Residual model

In this paper, similar patches are searched within an L × L search window centered at the location of the patch x_k. The C most similar patches, denoted m_{k,i}, 1 ≤ i ≤ C, are selected. The similarity between patches is measured by the mean squared error (MSE) [16], which has the expression

$$ \mathrm{MSE}\left({x}_k,{m}_{k,i}\right)=\frac{1}{S^2}{\left\Vert {x}_k-{m}_{k,i}\right\Vert}_2^2 $$
(6)

where S² is the size of the image patch.

The residual of the image patch x_k is the difference between the patch and the linear combination of its similar patches m_{k,i}, 1 ≤ i ≤ C, and can be expressed as

$$ R\left({x}_k\right)={x}_k-{\sum}_{i=1}^C{\alpha}_{k,i}{m}_{k,i} $$
(7)

where α_{k,i} is the weight of the similar patch and directly reflects the accuracy of the similar patches. Patches that are more similar to the image patch x_k should be assigned greater weight. Therefore, α_{k,i} is proportional to the similarity between the similar patch m_{k,i} and the image patch x_k, that is

$$ {\alpha}_{k,i}=\frac{\exp \left(-\mathrm{MSE}\left({x}_k,{m}_{k,i}\right)/h\right)}{\sum_{i=1}^C\exp \left(-\mathrm{MSE}\left({x}_k,{m}_{k,i}\right)/h\right)} $$
(8)

where h is a constant. Through Eq. (8), the weights of the similar patches can be calculated to reflect their similarity efficiently.
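A direct implementation of Eqs. (6)-(8) scans the L × L window, keeps the C best matches by MSE, and normalizes the exponential scores into the weights α_{k,i}; the sketch below (function and variable names are ours) operates on 2-D patches:

```python
import numpy as np

def similar_patches(img, r0, c0, S=8, L=20, C=10, h=80.0):
    """Return the C patches most similar to the SxS patch at (r0, c0),
    searched in an LxL window, plus their weights alpha (Eq. (8))."""
    target = img[r0:r0 + S, c0:c0 + S]
    cands, mses = [], []
    for r in range(max(0, r0 - L // 2), min(img.shape[0] - S, r0 + L // 2) + 1):
        for c in range(max(0, c0 - L // 2), min(img.shape[1] - S, c0 + L // 2) + 1):
            if (r, c) == (r0, c0):
                continue                      # skip the target itself
            patch = img[r:r + S, c:c + S]
            cands.append(patch)
            mses.append(np.mean((target - patch) ** 2))   # MSE, Eq. (6)
    order = np.argsort(mses)[:C]              # C most similar patches
    scores = np.exp(-np.array(mses)[order] / h)
    return [cands[i] for i in order], scores / scores.sum()   # alpha, Eq. (8)
```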

2.2 Reweighted sparse representation

The sparsity of the signal residual in some domain is well accepted and widely adopted [15, 16]. At present, there are various sparsifying transforms, such as the Fourier, wavelet, and Gabor transforms. In this paper, we use the DCT to sparsify the residual and the image patch because of its high efficiency and low complexity. The sparse transform of the residual under the DCT is

$$ \Phi \left({x}_k\right)=\varphi R\left({x}_k\right)=\varphi \left({x}_k-{\sum}_{i=1}^C{\alpha}_{k,i}{m}_{k,i}\right)={\overset{\smile }{x}}_k-{\sum}_{i=1}^C{\alpha}_{k,i}{\overset{\smile }{m}}_{k,i} $$
(9)

where φ denotes the DCT basis, φ^T is the transpose of the DCT basis, and \( {\overset{\smile }{x}}_k=\varphi {x}_k,\kern0.5em {\overset{\smile }{m}}_{k,i}=\varphi {m}_{k,i} \).

The residual is constrained by weighting; combining this with Eq. (9), the reweighted residual sparse representation is

$$ {S}_R\left({x}_k\right)={W}_{1,k}\Phi \left({x}_k\right)={W}_{1,k}\varphi R\left({x}_k\right)={W}_{1,k}\left({\overset{\smile }{x}}_k-{\sum}_{i=1}^C{\alpha}_{k,i}{\overset{\smile }{m}}_{k,i}\right) $$
(10)

where W_{1,k} is a diagonal matrix with \( {w}_{k,1},\dots, {w}_{k,{\mathrm{S}}^2} \) on the diagonal and zeros elsewhere. Through Eq. (10), we obtain the reweighted sparse representation of the residual. The calculation of the weights is described in detail in the next subsection.

2.3 Weight estimation

In each iteration, the weights need to be updated. The weights constrain each residual coefficient of Φ(x_k): the more similar the patches m_{k,i} are to the target patch x_k, the smaller the residual and hence the greater the weight. Therefore, the weight W_{1,k} in Eq. (10) is inversely proportional to the magnitude of the residual coefficient [13]. We calculate the weights from the following equation

$$ {w}_{k,l}=\frac{1}{\left|\Phi {\left({x}_k\right)}_l\right|+1}=\frac{1}{\left|{\left({\overset{\smile }{x}}_k-{\overset{\smile }{u}}_k\right)}_l\right|+1} $$
(11)

where \( {\overset{\smile }{u}}_k={\sum}_{i=1}^C{\alpha}_{k,i}{\overset{\smile }{m}}_{k,i} \).

For the image patch xk, the expression of its reweighted sparse representation is

$$ S\left({x}_k\right)={W}_{2,k}\varphi {x}_k={W}_{2,k}{\overset{\smile }{x}}_k $$
(12)

where W_{2,k} is a diagonal matrix, analogous to W_{1,k}, with \( {\widehat{w}}_{k,1},\dots, {\widehat{w}}_{k,{S}^2} \) on the diagonal. We calculate the weights from the image patch x_k by

$$ {\widehat{w}}_{k,l}=\frac{1}{\left|{\left({\overset{\smile }{x}}_k\right)}_l\right|+1} $$
(13)
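Equations (9)-(13) together say that both weight vectors are read off from the DCT coefficients of the current patch and of its similar-patch combination. A sketch using the orthonormal 1-D DCT on vectorized patches (helper name and data layout are ours):

```python
import numpy as np
from scipy.fft import dct

def weights_for_patch(x_k, similar, alpha):
    """DCT residual (Eq. (9)) and the diagonals of W_{1,k} (Eq. (11))
    and W_{2,k} (Eq. (13)); x_k and the similar patches are vectors."""
    xk_hat = dct(x_k, norm='ortho')                         # orthonormal DCT
    u_hat = dct(sum(a * m for a, m in zip(alpha, similar)), norm='ortho')
    w1 = 1.0 / (np.abs(xk_hat - u_hat) + 1.0)               # Eq. (11)
    w2 = 1.0 / (np.abs(xk_hat) + 1.0)                       # Eq. (13)
    return xk_hat, u_hat, w1, w2
```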

2.4 Solution to the proposed model

In this section, we detail the solution procedure. Eq. (4) is an unconstrained problem, and solving it directly is computationally demanding. To simplify the computation, we use the SBI algorithm to transform the unconstrained problem into several simpler sub-problems that are solved separately [17]. First, we briefly introduce the main steps of the SBI algorithm. The constrained optimization problem has the form

$$ {\min}_{u\in {\mathbb{R}}^N,v\in {\mathbb{R}}^M}f(u)+g(v),\kern1em \mathrm{s}.\mathrm{t}.\kern0.5em u= Gv $$
(14)

where G ∈ ℝ^(N × M), f : ℝ^N → ℝ, and g : ℝ^M → ℝ. The SBI solution process is shown in Algorithm 1.

Algorithm 1 Split Bregman iteration (SBI)
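In code, the generic loop of Algorithm 1 is only three lines per iteration; the sketch below assumes G = I (the case needed for Eq. (15)) and receives the two sub-problem solvers as arguments:

```python
import numpy as np

def split_bregman(solve_z, solve_x, shape, n_iter=50):
    """Generic SBI loop for min f(z) + g(x) s.t. z = x.
    solve_z(x, b): argmin_z f(z) + (eta/2)*||z - x - b||_2^2
    solve_x(z, b): argmin_x g(x) + (eta/2)*||z - x - b||_2^2"""
    x = np.zeros(shape)
    b = np.zeros(shape)
    for _ in range(n_iter):
        z = solve_z(x, b)      # line 3, cf. Eq. (16)
        x = solve_x(z, b)      # line 4, cf. Eq. (17)
        b = b - (z - x)        # line 5, cf. Eq. (18)
    return x
```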

Now we solve Eq. (4) within the SBI framework. We first introduce an auxiliary variable z to convert Eq. (4) into the equivalent constrained form

$$ \left(x,z\right)=\underset{x,z}{\arg \min}\frac{1}{2}{\left\Vert y- Hz\right\Vert}_2^2+{\lambda}_1{\sum}_{k=1}^D{\left\Vert {W}_1\left({x}_k-u\right)\right\Vert}_1+{\lambda}_2{\sum}_{k=1}^D{\left\Vert {W}_2{x}_k\right\Vert}_1\kern1em \mathrm{s}.\mathrm{t}.\kern0.5em z=x $$
(15)

Then, invoking SBI, line 3 of Algorithm 1 becomes

$$ {z}^{\left(t+1\right)}=\arg \min \frac{1}{2}{\left\Vert y- Hz\right\Vert}_2^2+\frac{\eta }{2}{\left\Vert z-{x}^{(t)}-{b}^{(t)}\right\Vert}_2^2 $$
(16)

Line 4 of Algorithm 1 becomes

$$ {x}^{\left(t+1\right)}=\arg \min \frac{\eta }{2}{\left\Vert {z}^{\left(t+1\right)}-x-{b}^{(t)}\right\Vert}_2^2+{\lambda}_1{\sum}_{k=1}^D{\left\Vert {W}_1\left({x}_k-u\right)\right\Vert}_1+{\lambda}_2{\sum}_{k=1}^D{\left\Vert {W}_2{x}_k\right\Vert}_1 $$
(17)

Line 5 of Algorithm 1 becomes

$$ {b}^{\left(t+1\right)}={b}^{(t)}-\left({z}^{\left(t+1\right)}-{x}^{\left(t+1\right)}\right) $$
(18)

where t is the SBI iteration index and η is a fixed parameter of the SBI. Equation (4) has thus been transformed into the z sub-problem of Eq. (16) and the x sub-problem of Eq. (17). Next, we give the detailed solution procedures for the two sub-problems.

2.4.1 Z sub-problem

For a given x, the z sub-problem is a strictly convex quadratic problem. In order to avoid computing a matrix inverse and to reduce the computational complexity of the algorithm, we use the steepest descent (SD) method [18] to solve it. For Eq. (16), the iterative formula is

$$ {z}^{\left(t,i+1\right)}={z}^{\left(t,i\right)}-{\rho}^{\left(t,i\right)}{g}^{\left(t,i\right)} $$
(19)

where g^(t,i) denotes the gradient and ρ^(t,i) the optimal step size; t and i are the iteration indices of the SBI and the SD, respectively. The gradient and the optimal step size are calculated as

$$ {g}^{\left(t,i\right)}=\nabla f\left({z}^{\left(t,i\right)}\right)=2{H}^T\left({Hz}^{\left(t,i\right)}-y\right)+\eta \left({z}^{\left(t,i\right)}-{x}^{(t)}-{b}^{(t)}\right) $$
(20)
$$ \rho ={g}^Tg/{g}^T Qg $$
(21)

Writing the gradient in the compact form

$$ {g}^{\left(t,i\right)}={Qz}^{\left(t,i\right)}-b $$
(22)

So, combining Eq. (20) and Eq. (22), we get

$$ Q=2{H}^TH+\eta I $$
(23)

Substituting Eq. (23) into Eq. (21) gives

$$ \rho ={g}^Tg/{g}^T\left(2{H}^TH+\eta I\right)g $$
(24)

The gradient and the optimal step size are computed from the above equations, and the output of the SD stage is the updated value z^(t, i_max), where i_max is set to 300 in our experiments.
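A sketch of this SD loop (names are ours); note that Q = 2HᵀH + ηI is never formed explicitly, only applied to a vector:

```python
import numpy as np

def solve_z(H, y, x, b, eta, i_max=300, tol=1e-12):
    """Steepest descent for the z sub-problem, Eq. (16)."""
    z = x + b                                           # warm start
    for _ in range(i_max):
        g = 2 * H.T @ (H @ z - y) + eta * (z - x - b)   # gradient, Eq. (20)
        Qg = 2 * H.T @ (H @ g) + eta * g                # Q g with Q of Eq. (23)
        denom = g @ Qg
        if denom <= tol:
            break                                       # gradient vanished
        z = z - (g @ g) / denom * g                     # step rho, Eqs. (19), (24)
    return z
```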

2.4.2 X sub-problem

For a given z, we can rewrite Eq. (17) as

$$ {x}^{\left(t+1\right)}=\arg \min \frac{\eta }{2}{\left\Vert {r}^{(t)}-x\right\Vert}_2^2+{\lambda}_1{\sum}_{k=1}^D{\left\Vert {W}_1\left({x}_k-u\right)\right\Vert}_1+{\lambda}_2{\sum}_{k=1}^D{\left\Vert {W}_2{x}_k\right\Vert}_1 $$
(25)

where r(t) = z(t + 1) − b(t).

In each iteration, we assume that the differences between the elements of x and r^(t) are independently distributed with zero mean. Therefore, according to the theorem in [19], we have

$$ {\left\Vert x-{r}^{(t)}\right\Vert}_2^2=\frac{N}{K}{\sum}_{k=1}^D{\left\Vert {x}_k-{r}_k^{(t)}\right\Vert}_2^2 $$
(26)

where K = D × S².

Due to the orthonormality of the DCT and the Plancherel theorem, we get

$$ {\left\Vert {x}_k-{r}_k^{(t)}\right\Vert}_2^2={\left\Vert {\overset{\smile }{x}}_k-{\overset{\smile }{r}}_k^{(t)}\right\Vert}_2^2 $$
(27)
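Equation (27) is just Parseval's identity for the orthonormal DCT, which can be checked numerically in a couple of lines:

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
x, r = rng.standard_normal(64), rng.standard_normal(64)
lhs = np.sum((x - r) ** 2)                                   # ||x_k - r_k||_2^2
rhs = np.sum((dct(x, norm='ortho') - dct(r, norm='ortho')) ** 2)
assert np.isclose(lhs, rhs)                                  # Eq. (27)
```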

Combining Eqs. (10), (12), (25), (26), and (27), we can get

$$ {\displaystyle \begin{array}{c}{x}^{\left(t+1\right)}=\arg \min \frac{1}{2}{\sum}_{k=1}^D{\left\Vert {{\overset{\smile }{r}}_{\mathrm{k}}}^{(t)}-{\overset{\smile }{x}}_k\right\Vert}_2^2+\frac{K{\lambda}_1}{N\eta}{\sum}_{k=1}^D{\left\Vert {W}_1\left({\overset{\smile }{x}}_k-\overset{\smile }{u}\right)\right\Vert}_1+\frac{K{\lambda}_2}{N\eta}{\sum}_{k=1}^D{\left\Vert {W}_2{\overset{\smile }{x}}_k\right\Vert}_1\\ {}=\arg \min {\sum}_{k=1}^D\left(\frac{1}{2}{\left\Vert {{\overset{\smile }{r}}_{\mathrm{k}}}^{(t)}-{\overset{\smile }{x}}_k\right\Vert}_2^2+\frac{K{\lambda}_1}{N\eta}{\left\Vert {W}_1\left({\overset{\smile }{x}}_k-\overset{\smile }{u}\right)\right\Vert}_1+\frac{K{\lambda}_2}{N\eta}{\left\Vert {W}_2{\overset{\smile }{x}}_k\right\Vert}_1\right)\end{array}} $$
(28)

which we decompose into D sub-problems as follows

$$ {\overset{\smile }{x}}_k^{\left(t+1\right)}=\arg \min \frac{1}{2}{\left\Vert {{\overset{\smile }{r}}_{\mathrm{k}}}^{(t)}-{\overset{\smile }{x}}_k\right\Vert}_2^2+\frac{K{\lambda}_1}{N\eta}{\left\Vert {W}_1\left({\overset{\smile }{x}}_k-\overset{\smile }{u}\right)\right\Vert}_1+\frac{K{\lambda}_2}{N\eta}{\left\Vert {W}_2{\overset{\smile }{x}}_k\right\Vert}_1 $$
(29)

We convert Eq. (29) into a scalar problem, that is

$$ y=\arg \min \frac{1}{2}{\left(d-v\right)}^2+{\theta}_1\left|v\right|+{\theta}_2\left|v-k\right| $$
(30)

where θ1 = Kλ2ŵ_{k,l}/(Nη) and θ2 = Kλ1w_{k,l}/(Nη), with ŵ_{k,l} and w_{k,l} the diagonal entries of W_{2,k} and W_{1,k}. The scalars y, d, v, k in Eq. (30) correspond elementwise to the four vectors \( {\overset{\smile }{x}}_k^{\left(t+1\right)},{{\overset{\smile }{r}}_{\mathrm{k}}}^{(t)},{\overset{\smile }{x}}_k,\overset{\smile }{u} \) in Eq. (29).

Through the soft threshold algorithm [20], we obtain the closed-form solution of Eq. (30)

$$ y=\left\{\begin{array}{ll}{S}_1(d),& k\ge 0\\ {}{S}_2(d),& k<0\end{array}\right. $$
(31)

And the expressions of S1(d) and S2(d) are as follows

$$ {S}_1(d)=\left\{\begin{array}{ll}d-{\theta}_1-{\theta}_2,& d>k+{\theta}_1+{\theta}_2\\ {}k,& k+{\theta}_1-{\theta}_2\le d\le k+{\theta}_1+{\theta}_2\\ {}d-{\theta}_1+{\theta}_2,& {\theta}_1-{\theta}_2<d<k+{\theta}_1-{\theta}_2\\ {}0,& -{\theta}_1-{\theta}_2\le d\le {\theta}_1-{\theta}_2\\ {}d+{\theta}_1+{\theta}_2,& d<-{\theta}_1-{\theta}_2\end{array}\right. $$
(32)
$$ {S}_2(d)=\left\{\begin{array}{c}d-{\theta}_1-{\theta}_2,d>{\theta}_1+{\theta}_2\\ {}0,{\theta}_2-{\theta}_1\le d\le {\theta}_1+{\theta}_2\\ {}d+{\theta}_1-{\theta}_2,k-{\theta}_1+{\theta}_2<d<{\theta}_2-{\theta}_1\\ {}k,k-{\theta}_1-{\theta}_2\le d\le k-{\theta}_1+{\theta}_2\\ {}d+{\theta}_1+{\theta}_2,d<k-{\theta}_1-{\theta}_2\end{array}\right. $$
(33)
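Instead of transcribing the piecewise cases of Eqs. (32) and (33), an equivalent and easier-to-verify route is to note that the minimizer of the convex scalar objective in Eq. (30) must be one of the two kinks (0 and k) or a stationary point of one of the smooth pieces; evaluating the objective over this small candidate set gives the same answer (a sketch with our own names):

```python
def solve_scalar(d, k, t1, t2):
    """Exact minimizer of f(v) = 0.5*(d - v)**2 + t1*|v| + t2*|v - k|,
    i.e., Eq. (30): the optimum is a kink (0 or k) or a stationary
    point v = d - t1*s1 - t2*s2 of a smooth piece (s1, s2 = +/-1)."""
    f = lambda v: 0.5 * (d - v) ** 2 + t1 * abs(v) + t2 * abs(v - k)
    cands = [0.0, k] + [d - t1 * s1 - t2 * s2
                        for s1 in (-1.0, 1.0) for s2 in (-1.0, 1.0)]
    return min(cands, key=f)
```

Applying this elementwise to the DCT coefficients, with the θ values of Eq. (30), yields the same solution as Eqs. (31)-(33).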

Through Eqs. (32) and (33), we can solve the x sub-problem and obtain the values of \( {\overset{\smile }{x}}_k^{\left(t+1\right)} \). The corresponding patch is then recovered by the inverse DCT as

$$ {x}_k^{\left(t+1\right)}={\varphi}^T{\overset{\smile }{x}}_k^{\left(t+1\right)} $$
(34)

After obtaining all the patches x_k, 1 ≤ k ≤ D, we compute each pixel value by averaging the patches that overlap it. The reconstruction procedure is expressed as

$$ {x}^{\left(t+1\right)}={\sum}_{k=1}^D{E}_k^T{x}_k^{\left(t+1\right)}./{\sum}_{k=1}^D{E}_k^T{E}_k $$
(35)
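The aggregation of Eq. (35) is a sum of patch values divided by the per-pixel overlap count; a sketch assuming each patch's top-left corner is known and every pixel is covered by at least one patch:

```python
import numpy as np

def aggregate(patches, positions, shape, S=8):
    """Eq. (35): average the overlapped patches back into an image.
    positions[k] is the top-left (row, col) of patch k."""
    num = np.zeros(shape)
    den = np.zeros(shape)
    for p, (r, c) in zip(patches, positions):
        num[r:r + S, c:c + S] += p.reshape(S, S)   # accumulates E_k^T x_k
        den[r:r + S, c:c + S] += 1.0               # per-pixel overlap count
    return num / den
```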

After solving the z sub-problem and the x sub-problem, b is updated through Eq. (18), and the three steps are repeated until the iterations are completed. A detailed description is given in Algorithm 2.

Algorithm 2 The proposed reweighted double sparse constraint reconstruction

3 Results and discussion

In this section, extensive experiments are conducted to verify the reconstruction performance, and we compare the reconstruction quality of the proposed algorithm with that of four other algorithms. The model parameters are set as follows: the image patch size is 64 (S = 8), the search window size is 20 × 20, and the remaining parameters are given in detail below. All experiments were run on an Intel(R) Core(TM) i3 3.0 GHz CPU with MATLAB 2012b on Windows 10.

To evaluate the quality of the reconstructed images, we adopt the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), which are commonly used for objective image quality assessment. A higher PSNR means better reconstruction quality and a more faithful image; a lower PSNR indicates poor quality, with problems such as edge blurring. SSIM reflects the structural similarity between the original and the reconstructed image. The experiments report the PSNR and SSIM of images reconstructed by the proposed algorithm and the four reference algorithms. All standard grayscale test images are shown in Fig. 1.
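For reference, PSNR is computed directly from the mean squared error; a minimal sketch for 8-bit grayscale images (SSIM is available off the shelf, e.g., skimage.metrics.structural_similarity):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with max value `peak`."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```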

Fig. 1 All experimental images

3.1 Discussion

3.1.1 Complexity of search window size

A large search window makes it possible to find patches that are more similar to the target patch, so a more accurate model can be obtained; however, it also brings higher computational complexity. In the experiment, we measured the computation time for the image Boat at different ratios. Figure 2 shows the algorithm complexity for different search window sizes L.

Fig. 2 Complexity analysis of search window size L

3.1.2 Effect of similar patches

To test the effect of the number of similar patches, experiments on three images were conducted with C ranging from 2 to 24. The results in Fig. 3 show that the proposed algorithm is insensitive to the number of similar patches: all curves are nearly flat, and different images behave consistently. According to extensive experimental results, all test images achieve the highest and most stable performance when C is 10, and larger values only increase the computational cost once stable performance is reached. We therefore set C = 10.

Fig. 3 Performance comparison of various C for three images

3.1.3 Effect of regularization parameters

In this section, we test the effect of the regularization parameters λ1 and λ2 on the performance of the reconstructed image.

Since λ1 and λ2 affect the performance of the model simultaneously, we fix one parameter while testing the other. As Fig. 4 shows, reconstruction performance degrades if λ1 or λ2 is too large or too small. Moreover, the effect of λ1 and λ2 on the PSNR is consistent across different images; that is, there exist optimal regularization parameters λ1 and λ2 that maximize the performance of the algorithm for all test images. In this paper, we set λ1 = 2.5e−3 and λ2 = 2.5e−4.

Fig. 4 Performance comparison of various λ1, λ2 for three images

Next, to verify the benefit of combining the two constraints, we consider the cases in which λ1 or λ2 is set to zero. The experimental results are shown in Fig. 5. For different images, the PSNR obtained with a single sparse constraint is lower than that of the proposed model; that is, combining nonlocal similarity with local sparsity performs better than using either prior alone, and the proposed model achieves the highest PSNR. For the image Boat in Fig. 5, the proposed algorithm improves the PSNR by about 0.01 dB and 1 dB over the two single-constraint models; for the image Lena, the improvements are about 0.06 dB and 0.4 dB.

Fig. 5 Performance comparison between separate models and the proposed model

3.1.4 Effect of weight parameter

The parameter h in the weight α_{k,i} influences the accuracy of the similar-patch weights and hence the quality of the reconstructed image. We therefore vary h from 1 to 120 and test the reconstruction performance on three images. As Fig. 6 shows, h has a noticeable influence on reconstruction quality: the PSNR is stable and reaches its maximum for h in the range 70 to 120, and different test images behave consistently, achieving their best performance in the same range of h. Based on these results, we set h = 80.

Fig. 6 Performance comparison of various h for three images

3.2 Reconstruction results

Recall that the original image is x ∈ ℝ^N and its measurement is y ∈ ℝ^M, where H is an M × N measurement matrix. Compressive sensing aims to recover the original high-quality image from the measurements with high probability. The measurement rate (ratio) equals M/N.

In our simulation experiments, the measurements are obtained by applying a Gaussian random projection matrix to the original image. We present reconstruction results for four representative compressive sensing algorithms: BCS-SPL-DCT [6], BCS-SPL-DWT [6], the MH method [9], and the collaborative sparsity (CoS) method [10]. It is worth noting that the MH and CoS methods are regarded as advanced algorithms for image CS reconstruction. The PSNR/SSIM of the reconstructed images are shown in Table 1.

Table 1 PSNR and SSIM comparisons with various image reconstruction algorithms

We compare the images reconstructed by the proposed algorithm with those reconstructed by the other four algorithms visually. The comparison results are shown in Figs. 7, 8, 9, 10, 11, and 12.

Fig. 7 Reconstruction quality comparison for the image GoldHill at a measurement ratio of 20%. From left to right: original image; MH (PSNR = 30.16 dB, SSIM = 0.82); BCS-SPL-DCT (PSNR = 27.95 dB, SSIM = 0.69); BCS-SPL-DWT (PSNR = 25.22 dB, SSIM = 0.64); CoS (PSNR = 30.59 dB, SSIM = 0.81); proposed method (PSNR = 31.28 dB, SSIM = 0.94)

Fig. 8 Reconstruction quality comparison for the image Barbara at a measurement ratio of 20%. From left to right: original image; MH (PSNR = 31.21 dB, SSIM = 0.91); BCS-SPL-DCT (PSNR = 24.36 dB, SSIM = 0.71); BCS-SPL-DWT (PSNR = 22.24 dB, SSIM = 0.59); CoS (PSNR = 25.60 dB, SSIM = 0.78); proposed method (PSNR = 32.81 dB, SSIM = 0.98)

Fig. 9 Reconstruction quality comparison for the image Cameraman at a measurement ratio of 30%. From left to right: original image; MH (PSNR = 36.39 dB, SSIM = 0.96); BCS-SPL-DCT (PSNR = 33.03 dB, SSIM = 0.93); BCS-SPL-DWT (PSNR = 26.65 dB, SSIM = 0.86); CoS (PSNR = 37.25 dB, SSIM = 0.96); proposed method (PSNR = 39.77 dB, SSIM = 0.99)

Fig. 10 Reconstruction quality comparison for the image Boat at a measurement ratio of 30%. From left to right: original image; MH (PSNR = 31.00 dB, SSIM = 0.86); BCS-SPL-DCT (PSNR = 28.88 dB, SSIM = 0.79); BCS-SPL-DWT (PSNR = 24.24 dB, SSIM = 0.68); CoS (PSNR = 32.27 dB, SSIM = 0.88); proposed method (PSNR = 33.08 dB, SSIM = 0.96)

Fig. 11 Reconstruction quality comparison for the image Lena at a measurement ratio of 40%. From left to right: original image; MH (PSNR = 36.23 dB, SSIM = 0.94); BCS-SPL-DCT (PSNR = 34.20 dB, SSIM = 0.91); BCS-SPL-DWT (PSNR = 28.50 dB, SSIM = 0.81); CoS (PSNR = 36.17 dB, SSIM = 0.93); proposed method (PSNR = 37.54 dB, SSIM = 0.98)

Fig. 12 Reconstruction quality comparison for the image Peppers at a measurement ratio of 40%. From left to right: original image; MH (PSNR = 35.59 dB, SSIM = 0.91); BCS-SPL-DCT (PSNR = 31.75 dB, SSIM = 0.88); BCS-SPL-DWT (PSNR = 28.11 dB, SSIM = 0.76); CoS (PSNR = 35.78 dB, SSIM = 0.91); proposed method (PSNR = 36.60 dB, SSIM = 0.98)

The figures show the PSNR and SSIM of images reconstructed by the different algorithms at ratios of 20%, 30%, and 40%. The images reconstructed by BCS-SPL-DCT and BCS-SPL-DWT are generally blurry, their texture is not clear enough, and their visual quality is worse than that of the other three algorithms. The MH and CoS algorithms both produce images of good quality with little visual difference between them; the CoS algorithm in particular reconstructs images with stronger texture. In terms of PSNR, the images reconstructed by the proposed algorithm score higher than those of the MH and CoS algorithms; that is, the proposed algorithm effectively improves the quality of image reconstruction and is generally better than the four classic compressive sensing reconstruction algorithms above.

4 Conclusion

In this paper, we propose a reweighted double sparse constraint reconstruction model. The model not only takes full advantage of nonlocal similarity and local sparsity, using the residual model and ℓ1 regularization to enhance sparsity, but also uses reweighting to further improve the quality of the reconstructed image. The model is mathematically defined as the solution of a reweighted ℓ1 minimization problem, and an effective solution scheme based on the SBI framework is designed to reduce the computational complexity. Under the SBI framework, the model is transformed from an unconstrained problem into several simple sub-problems, which we solve with the steepest descent method and a soft-thresholding algorithm, respectively. Extensive experiments demonstrate that the proposed model achieves better reconstruction quality and visual effects than the four other representative algorithms. Future work includes optimizing image block matching and reducing the computational complexity.

References

  1. E.J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theor. 52(2), 489–509 (2006)


  2. K.V. Siddamal, S.P. Bhat, V.S. Saroja, A survey on compressive sensing, in IEEE 2nd International Conference on Electronics and Communication Systems (2015), pp. 639–643

  3. L. Gan, Block compressed sensing of natural images, in IEEE 12th International Conference on Digital Signal Processing (2007), pp. 403–406

  4. A.M. Bruckstein, D.L. Donoho, M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51(1), 34–81 (2009)


  5. J. Mairal, M. Elad, G. Sapiro, Sparse representation for color image restoration. IEEE Trans. Image Process. 17(1), 53–69 (2007)


  6. S. Mun, J.E. Fowler, Block compressed sensing of images using directional transforms, in IEEE 17th International Conference on Image Processing (2010), p. 547

  7. S. Kindermann, S. Osher, P.W. Jones, Deblurring and denoising of images by nonlocal functionals. SIAM J. Multiscale Model. Simul. 4(4), 1091–1115 (2005)


  8. X. Zhang, M. Burger, X. Bresson, Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imag. Sci. 3(3), 253–276 (2010)


  9. C. Chen, E.W. Tramel, J.E. Fowler, Compressed-sensing recovery of images and video using multihypothesis predictions, in IEEE 46th Asilomar Conference on Signals, Systems and Computers (2012), pp. 1193–1198

  10. J. Zhang, D. Zhao, C. Zhao, Image compressive sensing recovery via collaborative sparsity. IEEE J. Emerging Sel. Top. Circuits Syst. 2(3), 380–391 (2012)


  11. W. Dong, G. Shi, X. Li, Compressive sensing via nonlocal low-rank regularization. IEEE Trans. Image Process. 23(8), 3618–3632 (2014)


  12. J. Zhang, C. Zhao, D. Zhao, W. Gao, Image compressive sensing recovery using adaptively learned sparsifying basis via ℓ0 minimization. Signal Process. 103(10), 114–126 (2014)


  13. E.J. Candès, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl. 14(5–6), 877–905 (2008)

  14. M.S. Asif, J. Romberg, Fast and accurate algorithms for re-weighted ℓ1-norm minimization. IEEE Trans. Signal Process. 61(23), 5905–5916 (2013)


  15. S. Mun, J.E. Fowler, Residual reconstruction for block-based compressed sensing of video, in IEEE Data Compression Conference (2011), pp. 183–192

  16. C. Zhao, S. Ma, J. Zhang, R. Xiong, W. Gao, Video compressive sensing reconstruction via reweighted residual sparsity. IEEE Trans. Circuits Syst. Video Technol. 27(6), 1182–1195 (2017)


  17. T. Goldstein, S. Osher, The split Bregman method for ℓ1-regularized problems. SIAM J. Imaging Sci. 2(2), 323–343 (2009)

  18. P. Deift, X. Zhou, A steepest descent method for oscillatory Riemann-Hilbert problems. Ann. Math. 137(2), 295–368 (1993)


  19. J. Zhang, C. Zhao, D. Zhao, Image compressive sensing recovery using adaptively learned sparsifying basis via L0 minimization. Signal Process. 103(10), 114–126 (2014)


  20. D.L. Donoho, De-noising by soft-thresholding. IEEE Trans. Inf. Theory 41(3), 613–627 (1995)


Acknowledgements

The authors thank the editor and reviewers.

Funding

Financial support for this work was provided by the National Natural Science Foundation of China [grant number 61501069] and the Fundamental Research Funds for the Central Universities [grant number 106112016CDJXZ168815].

Availability of data and materials

The datasets supporting the conclusions of this article are included within the article.

Author information


Contributions

YZ conceived and designed the experiments. JZ and XC performed the experiments and analyzed the data. GH and ZZ wrote the paper. ZH put forward constructive comments. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yuanhong Zhong.

Ethics declarations

Authors’ information

  1.

    Yuanhong Zhong received his BS, MS, and PhD degrees in communication engineering from Chongqing University, Chongqing, China, in 2003, 2006, and 2011, respectively. He is currently an Associate Professor with the School of Microelectronics and Communication Engineering, Chongqing University. His research interests include image processing, machine learning, and computer vision.

  2.

    Jing Zhang received her BS degree in communication engineering from Chongqing University, Chongqing, China, in 2018. She is now a postgraduate student in the School of Microelectronics and Communication Engineering at Chongqing University.

  3.

    Xinyu Cheng received his BS degree in electronic information engineering from Chongqing University, Chongqing, China, in 2018. He is now a postgraduate student in the School of Microelectronics and Communication Engineering at Chongqing University.

  4.

    Guan Huang received her BS degree in communication engineering from Chongqing University, Chongqing, China, in 2018. She is now a postgraduate student in the School of Automotive Engineering at Chongqing University.

  5.

    Zhaokun Zhou received his BS degree in communication engineering from Chongqing University, Chongqing, China, in 2018. He is now a postgraduate student in the School of Automotive Engineering at Chongqing University.

  6.

    Zhiyong Huang received the B.Sc. degree in electrical engineering and the Ph.D. degree in electronic engineering from Chongqing University, Chongqing, China, in 2001 and 2009, respectively. He is currently an Associate Professor with the School of Microelectronics and Communication Engineering, Chongqing University. His research interests include image/video processing, computer vision, and machine learning.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Zhong, Y., Zhang, J., Cheng, X. et al. Reconstruction for block-based compressive sensing of image with reweighted double sparse constraint. J Image Video Proc. 2019, 63 (2019). https://doi.org/10.1186/s13640-019-0464-1


