
# Reconstruction for block-based compressive sensing of image with reweighted double sparse constraint

Yuanhong Zhong^{1}, Jing Zhang^{1}, Xinyu Cheng^{1}, Guan Huang^{1}, Zhaokun Zhou^{1}, and Zhiyong Huang^{1}

**2019**:63

https://doi.org/10.1186/s13640-019-0464-1

© The Author(s). 2019

**Received:**18 September 2018**Accepted:**30 April 2019**Published:**24 May 2019

## Abstract

Block compressive sensing reduces computational complexity by dividing the image into multiple patches for processing, but this degrades the performance of the reconstruction algorithm. Generally, reconstruction algorithms improve the quality of the reconstructed image by adding various constraints and regularization terms, i.e., prior information. In this paper, a reweighted double sparse constraint reconstruction model, which combines residual sparsity with an ℓ1 regularization term, is proposed. The residual sparsity exploits the nonlocal similarity of image patches, and the ℓ1 regularization term exploits the local sparsity of image patches. The resulting model is solved within the framework of split Bregman iteration (SBI). Extensive experiments show that the proposed algorithm can reconstruct the original image efficiently and is comparable to current representative compressive sensing reconstruction algorithms.

## Keywords

- Image reconstruction
- Compressive sensing
- Reweighted double sparse constraint

## 1 Introduction

Compressive sensing (CS) theory, proposed by Candès et al. [1], breaks the limitation of the Nyquist sampling theorem: it can recover the original signal from samples taken at a rate lower than twice the signal bandwidth, provided the signal is sparse in some domain. Since its birth, CS theory has been widely used in various fields, such as nuclear magnetic resonance, image processing, analog-to-information conversion, and compressive radar [2].

Suppose \( x \in \mathbb{R}^N \) is a finite-length signal that is sparse or compressible. According to CS theory [1], rather than observing the original signal \( x \) directly, a much smaller number of linear measurements is acquired by the following random linear projection:

$$ y = Hx + n, \qquad (1) $$

where \( y \in \mathbb{R}^M \) is the measurement of the signal, \( H \in \mathbb{R}^{M \times N} \) represents the measurement matrix, and \( n \) denotes additive noise. Since \( M \) is much smaller than \( N \), the solution of Eq. (1) is an ill-posed inverse problem.

The original signal can be recovered by solving the regularized optimization problem

$$ \hat{x} = \arg\min_{x} \frac{1}{2}\left\| y - Hx \right\|_2^2 + \lambda R(x), \qquad (2) $$

where \( R(x) \) is a regularization term with various choices such as the ℓ0 norm, the ℓ1 norm, and total variation (TV), and \( \lambda \) is a regularization parameter.

Since natural images are almost always compressible, applying compressive sensing to them is natural. However, if compressive sensing is applied directly to a large image, the measurement matrix becomes very large, which requires huge amounts of memory and results in high computational complexity. Therefore, the block-based compressive sensing (BCS) method [3] was proposed. In BCS, the image is divided into multiple patches and each patch is measured separately, so the computational complexity is greatly reduced. Nevertheless, the quality of the reconstructed image is degraded compared with reconstructing the entire image at once.
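To make the block-wise measurement concrete, the following is a minimal sketch of BCS acquisition; the Gaussian measurement matrix, the 8 × 8 block size, the 30% sampling ratio, and the function name `bcs_measure` are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def bcs_measure(image, B=8, ratio=0.3, seed=0):
    """Block-based CS: split the image into non-overlapping BxB patches
    and apply the same random Gaussian measurement matrix to each one."""
    H, W = image.shape
    assert H % B == 0 and W % B == 0, "image must tile into BxB blocks"
    M = max(1, int(round(ratio * B * B)))               # measurements per block
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((M, B * B)) / np.sqrt(M)  # shared M x B^2 matrix

    blocks, measurements = [], []
    for i in range(0, H, B):
        for j in range(0, W, B):
            xk = image[i:i+B, j:j+B].reshape(-1)        # vectorize one patch
            blocks.append(xk)
            measurements.append(Phi @ xk)               # y_k = Phi x_k
    return Phi, np.array(blocks), np.array(measurements)

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
Phi, X, Y = bcs_measure(img, B=8, ratio=0.3)
print(Phi.shape, X.shape, Y.shape)   # (19, 64) (64, 64) (64, 19)
```

Because every block shares one small \( M \times B^2 \) matrix, storage no longer grows with the full image size, which is exactly the memory saving BCS targets.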

Because prior knowledge has a crucial influence on the performance of an image reconstruction algorithm, designing an effective regularization term helps make full use of image priors and further improves the quality of the reconstructed image. Sparsity and nonlocal similarity, two of the most important properties of natural images, are exploited for this purpose. Sparsity means that the original image can be represented with only a few significant nonzero coefficients, i.e., the image can be organized more sparsely in some domain [4, 5]. Currently, different predetermined transform bases, including the discrete cosine transform (DCT) and the discrete wavelet transform (DWT), have been used to exploit sparsity and to derive reconstruction algorithms such as BCS with smoothed projected Landweber reconstruction based on DCT (BCS-SPL-DCT) [6] and on DWT (BCS-SPL-DWT) [6]. Furthermore, to enrich the texture and structure in recovered images [7, 8], the multi-hypothesis (MH) prediction method [9], which explores nonlocal similarities, was proposed. Sharing a similar idea, the methods in [10–12] exploit nonlocal similarities to design local sparsifying transforms and achieve better recovery performance than earlier BCS algorithms that ignore nonlocal similarities. However, the recovered images still contain some visual artifacts.

For better CS recovery, some researchers have used the idea of weighting the signal. Candès et al. [13] proposed a weighting scheme based on the magnitudes of the signal coefficients to approximate the ℓ0 norm more closely while still using the ℓ1 norm in the optimization problem. In a similar manner, Asif et al. [14] adaptively assigned weights based on the homotopy of the signal.
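A minimal sketch of the magnitude-based reweighting idea of [13]: small weights on large coefficients, large weights on small ones, so the weighted ℓ1 norm counts sizable coefficients roughly once each, like ℓ0. The ε value here is an illustrative assumption:

```python
import numpy as np

def reweight(x, eps=1e-3):
    """Candes-style reweighting [13]: w_i = 1 / (|x_i| + eps), so large
    coefficients get small weights and the weighted l1 norm approaches l0."""
    return 1.0 / (np.abs(x) + eps)

x = np.array([5.0, 0.0, -0.01, 2.0])
w = reweight(x)
# Each sizable coefficient contributes ~1 to the weighted l1 norm,
# mimicking the l0 count of nonzeros (here: close to 3).
print(np.round(np.sum(w * np.abs(x)), 3))   # prints 2.908
```

In practice the weights are recomputed from the previous iterate, so each reweighted ℓ1 solve sharpens the sparsity of the next.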

To better reconstruct the original image in both smooth and edge regions, this paper proposes a method that combines sparsity and nonlocal similarity by means of reweighting. First, the residual, i.e., the difference between the target image patch and a combination of its similar patches, is calculated; it is sparser than the patch itself, so residual sparsity is applied to exploit the nonlocal similarity. Second, reweighting is applied in the sparse constraint to account for the different sparsity of different image patches and thereby enhance sparsity. Third, we use the SBI framework to transform the proposed model into several sub-problems that can be solved effectively.

The main contributions of this paper are summarized as follows:

1. We propose a reconstruction model that combines the residual sparsity and the ℓ1 regularization term by way of reweighting. It utilizes local sparsity and nonlocal self-similarity simultaneously, so the model can further improve the performance of the reconstruction algorithm.
2. To solve the reweighted double sparse constraint model, we design an effective scheme based on the split Bregman iteration (SBI) algorithm. Extensive experiments are conducted, and our method is compared with other typical algorithms using PSNR and SSIM.

The rest of this paper is organized as follows. Section 2 introduces the reconstruction model in four parts: the residual model, the reweighted sparse representation, the weight estimation, and the solution of the proposed model. Section 3 presents extensive experiments and compares the results with other representative methods. Section 4 concludes the paper.

## 2 Proposed method

In BCS, we divide the original image \( x \) into non-overlapped patches \( p_k \), \( 1 \le k \le P \), of size \( B^2 \). The MH prediction method [9] is used to obtain an initial recovery of the image. Then, we divide the initial recovery \( x^{\mathrm{int}} \) into \( D \) overlapped patches \( x_k \), \( 1 \le k \le D \), where \( S^2 \) is the patch size. This partitioning process is formulated as

$$ x_k = E_k x^{\mathrm{int}}, \qquad (3) $$

where \( E_k \) is a matrix operator that extracts the patch \( x_k \) from \( x^{\mathrm{int}} \) in an overlapped way. We consider the residual between each patch and the linear combination of its similar patches, since the residual exhibits stronger sparsity [15]. Because different residuals have different sparsity, we adopt a reweighting strategy. Simultaneously, the sparsity of the image patch itself is taken into account, and we enhance it by reweighted ℓ1 minimization [13]. Thus, the reweighted double sparse constraint model is expressed as

$$ \min_{x} \frac{1}{2}\left\| y - Hx \right\|_2^2 + \lambda_1 \left\| W_1 (x - u) \right\|_1 + \lambda_2 \left\| W_2 x \right\|_1, \qquad (4) $$

where \( \lambda_1 \) and \( \lambda_2 \) denote regularization constants, \( y \) represents the corresponding measurement, \( W_1 \) and \( W_2 \) are reweighting matrices that are updated iteratively, \( u \) is the linear combination of the similar patches, and \( H \) is a block-diagonal matrix whose diagonal blocks are the measurement matrix \( \Psi \) of size \( M \times S^2 \). The expression of \( H \) is written as

$$ H = \operatorname{diag}(\Psi, \Psi, \dots, \Psi). \qquad (5) $$

The residual model ‖*W*_{1}(*x* − *u*)‖_{1} exploits the nonlocal similarity, and the ℓ1 regularization term ‖*W*_{2}*x*‖_{1} exploits the local sparsity of each image patch. *W*_{1} discriminatively weights different residual coefficients, and *W*_{2} reweights the ℓ1 minimization. Combining *W*_{1} and *W*_{2} further enhances sparsity. In the proposed model, nonlocal similarity and local sparsity are combined effectively to further improve the reconstruction quality.

Next, we will introduce the model in detail, including the residual model, reweighted sparse representation, and weight estimation.

### 2.1 Residual model

For each patch \( x_k \), we define an \( L \times L \) search window centered at the location of the patch. Within this window, we select the \( C \) most similar patches, denoted by \( m_{k,i} \), \( 1 \le i \le C \). The similarity between patches is measured by the mean squared error (MSE) [16], which has the expression

$$ \mathrm{MSE}(x_k, m_{k,i}) = \frac{1}{S^2}\left\| x_k - m_{k,i} \right\|_2^2, \qquad (6) $$

where \( S^2 \) is the size of the image patch.

The residual of \( x_k \) is the difference between the patch and the linear combination of its similar patches \( m_{k,i} \), \( 1 \le i \le C \), and can be expressed as

$$ r_k = x_k - \sum_{i=1}^{C} \alpha_{k,i} m_{k,i}, \qquad (7) $$

where \( \alpha_{k,i} \) is the weight of the similar patch and directly reflects its accuracy. Patches that are more similar to the image patch \( x_k \) should be assigned greater weight. Therefore, \( \alpha_{k,i} \) is proportional to the similarity between the similar patch \( m_{k,i} \) and the image patch \( x_k \), that is,

$$ \alpha_{k,i} = \frac{\exp\left( -\mathrm{MSE}(x_k, m_{k,i}) / h \right)}{\sum_{j=1}^{C} \exp\left( -\mathrm{MSE}(x_k, m_{k,j}) / h \right)}, \qquad (8) $$

where \( h \) is a constant. By Eq. (8), we can calculate the weights of the similar patches to reflect the similarity efficiently.
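The patch selection and weighting described above can be sketched as follows; the normalized exponential form of the weights and the function name `similar_patch_weights` are illustrative assumptions consistent with nonlocal-means-style weighting, not the paper's verbatim formula:

```python
import numpy as np

def similar_patch_weights(xk, candidates, C=10, h=80.0):
    """Pick the C candidate patches closest to xk in MSE, weight them by
    exp(-MSE/h) normalized to sum to 1, and form the linear combination u."""
    mse = np.mean((candidates - xk) ** 2, axis=1)   # MSE against each candidate
    idx = np.argsort(mse)[:C]                       # the C most similar patches
    w = np.exp(-mse[idx] / h)
    alpha = w / w.sum()                             # normalized weights alpha_{k,i}
    u = alpha @ candidates[idx]                     # linear combination u
    return idx, alpha, u

rng = np.random.default_rng(1)
xk = rng.standard_normal(64)                        # one vectorized 8x8 patch
cands = xk + 0.1 * rng.standard_normal((40, 64))    # noisy lookalike patches
idx, alpha, u = similar_patch_weights(xk, cands, C=10)
print(np.isclose(alpha.sum(), 1.0), u.shape)        # True (64,)
```

Normalizing the weights keeps the combination \( u \) on the same intensity scale as \( x_k \), so the residual \( x_k - u \) stays small wherever good matches exist.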

### 2.2 Reweighted sparse representation

We represent the residual in the transform domain, where \( \varphi \) denotes the DCT basis, \( \varphi^T \) is the transpose of the DCT basis, and \( {\overset{\smile }{x}}_k=\varphi {x}_k,\kern0.5em {\overset{\smile }{m}}_{k,i}=\varphi {m}_{k,i} \). The residual in the DCT domain is

$$ \overset{\smile }{r}_k = \overset{\smile }{x}_k - \sum_{i=1}^{C} \alpha_{k,i} \overset{\smile }{m}_{k,i}, \qquad (9) $$

and its reweighted sparse representation is

$$ \left\| W_{1,k}\, \overset{\smile }{r}_k \right\|_1, \qquad (10) $$

where \( W_{1,k} \) is a diagonal matrix with \( {w}_{k,1},\dots, {w}_{k,{\mathrm{S}}^2} \) on the diagonal and zeros elsewhere. Through Eq. (10), we obtain the reweighted sparse representation of the residual. The calculation of the weights is described in detail in the next section.

### 2.3 Weight estimation

The weight matrix \( W_{1,k} \) reflects the accuracy of the similar patches of \( x_k \): the more similar the patches \( m_{k,i} \) are to the target patch \( x_k \), the smaller the residual coefficients and the greater the weights they take. Therefore, the weight \( W_{1,k} \) in Eq. (10) is inversely proportional to the magnitude of the residual value [13]. In this paper, we calculate the weight from the following equation:

$$ w_{k,j} = \frac{1}{\left| \overset{\smile }{r}_{k,j} \right| + \varepsilon}, \qquad 1 \le j \le S^2, \qquad (11) $$

where \( \varepsilon \) is a small positive constant that avoids division by zero. For the image patch \( x_k \), the expression of its reweighted sparse representation is

$$ \left\| W_{2,k}\, \overset{\smile }{x}_k \right\|_1, \qquad (12) $$

where \( W_{2,k} \) is a diagonal matrix similar to \( W_{1,k} \), with \( {\widehat{w}}_{k,1},\dots, {\widehat{w}}_{k,{S}^2} \) on the diagonal. We use the image patch \( x_k \) itself to calculate the weight by

$$ \widehat{w}_{k,j} = \frac{1}{\left| \overset{\smile }{x}_{k,j} \right| + \varepsilon}, \qquad 1 \le j \le S^2. \qquad (13) $$

### 2.4 Solution to the proposed model

SBI [17] solves optimization problems of the form \( \min_{x,z} f(x) + g(z) \) subject to \( z = Gx \), where \( G \in \mathbb{R}^{M \times N} \), \( f: \mathbb{R}^N \to \mathbb{R} \), and \( g: \mathbb{R}^M \to \mathbb{R} \). The SBI algorithm solution process is shown in Algorithm 1.

We introduce an auxiliary variable \( z \) to convert Eq. (4) into the equivalent constrained expression

$$ \min_{x,z} \frac{1}{2}\left\| y - Hz \right\|_2^2 + \lambda_1 \left\| W_1 (x - u) \right\|_1 + \lambda_2 \left\| W_2 x \right\|_1 \quad \text{s.t.} \quad z = x. \qquad (14) $$

With the Bregman variable \( b \) updated by

$$ b^{(t+1)} = b^{(t)} - \left( z^{(t+1)} - x^{(t+1)} \right), \qquad (15) $$

SBI alternates between the z sub-problem

$$ z^{(t+1)} = \arg\min_{z} \frac{1}{2}\left\| y - Hz \right\|_2^2 + \frac{\eta}{2}\left\| z - x^{(t)} - b^{(t)} \right\|_2^2 \qquad (16) $$

and the x sub-problem

$$ x^{(t+1)} = \arg\min_{x} \lambda_1 \left\| W_1 (x - u) \right\|_1 + \lambda_2 \left\| W_2 x \right\|_1 + \frac{\eta}{2}\left\| z^{(t+1)} - x - b^{(t)} \right\|_2^2, \qquad (17) $$

where \( t \) represents the number of iterations and \( \eta \) is a fixed parameter in the SBI. Equation (4) has thus been transformed by the SBI algorithm into the z sub-problem of Eq. (16) and the x sub-problem of Eq. (17). Next, we give the detailed solution procedure for the two sub-problems.

#### 2.4.1 Z sub-problem

The z sub-problem of Eq. (16) is a strictly convex quadratic problem, which we solve with the steepest descent (SD) method [18]:

$$ z^{(t,i+1)} = z^{(t,i)} - \rho^{(t,i)} g^{(t,i)}, \qquad (18) $$

where \( g^{(t,i)} \) denotes the gradient, \( \rho^{(t,i)} \) indicates the optimal step, and \( t \) and \( i \) represent the iteration numbers of the SBI and the SD, respectively. The gradient and the optimal step are calculated as

$$ g^{(t,i)} = H^T\left( H z^{(t,i)} - y \right) + \eta \left( z^{(t,i)} - x^{(t)} - b^{(t)} \right), \qquad (19) $$

$$ \rho^{(t,i)} = \frac{ \left( g^{(t,i)} \right)^T g^{(t,i)} }{ \left( g^{(t,i)} \right)^T \left( H^T H + \eta I \right) g^{(t,i)} }. \qquad (20) $$

The gradient and optimal step are calculated by the above equation, and the output of the SD is an updated value of *z*^{(t, i _ max)}, where *i* _ max is 300 in our experimental setting.
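Assuming the z sub-problem takes the quadratic form described above (data fidelity plus the η-penalty from the splitting), the SD update with an exact line-search step can be sketched as follows; the function names and toy problem sizes are illustrative:

```python
import numpy as np

def solve_z(H, y, x, b, eta, i_max=300):
    """Steepest descent for the assumed quadratic z sub-problem
    min_z 0.5*||y - Hz||^2 + 0.5*eta*||z - x - b||^2."""
    z = x.copy()
    for _ in range(i_max):
        g = H.T @ (H @ z - y) + eta * (z - x - b)     # gradient g^(t,i)
        curv = g @ (H.T @ (H @ g)) + eta * (g @ g)    # g^T (H^T H + eta*I) g
        if curv < 1e-12:                              # gradient vanished
            break
        z -= (g @ g) / curv * g                       # exact line-search step
    return z

def objective(H, y, z, x, b, eta):
    return 0.5 * np.sum((y - H @ z) ** 2) + 0.5 * eta * np.sum((z - x - b) ** 2)

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 64))
y = H @ rng.standard_normal(64)
x = np.zeros(64); b = np.zeros(64)
z = solve_z(H, y, x, b, eta=0.05)
print(objective(H, y, z, x, b, 0.05) < objective(H, y, x, x, b, 0.05))  # True
```

Because the step is the exact minimizer along the gradient direction, each SD iteration decreases the quadratic objective monotonically.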

#### 2.4.2 X sub-problem

Given \( z^{(t+1)} \) and \( b^{(t)} \), the x sub-problem of Eq. (17) becomes

$$ \min_{x} \lambda_1 \left\| W_1 (x - u) \right\|_1 + \lambda_2 \left\| W_2 x \right\|_1 + \frac{\eta}{2}\left\| x - r^{(t)} \right\|_2^2, \qquad (21) $$

where \( r^{(t)} = z^{(t+1)} - b^{(t)} \). We assume that each element of \( x - r^{(t)} \) satisfies an independent distribution with a mean of zero. Therefore, according to the theorem in [19], we have

$$ \frac{1}{N}\left\| x - r^{(t)} \right\|_2^2 = \frac{1}{K} \sum_{k=1}^{D} \left\| x_k - r_k^{(t)} \right\|_2^2, \qquad (22) $$

where \( K = D \times S^2 \).

Substituting Eq. (22) into Eq. (21), the x sub-problem is decomposed into \( D \) sub-problems as follows:

$$ \min_{x_k} \lambda_1 \left\| W_{1,k}\left( \overset{\smile }{x}_k - \overset{\smile }{u} \right) \right\|_1 + \lambda_2 \left\| W_{2,k}\, \overset{\smile }{x}_k \right\|_1 + \frac{N\eta}{2K}\left\| x_k - r_k^{(t)} \right\|_2^2, \qquad 1 \le k \le D. \qquad (23) $$

Because the DCT basis is orthonormal, the ℓ2 term can be written in the transform domain as well, and after dividing through by \( N\eta/K \), Eq. (23) separates into independent scalar problems of the form

$$ y = \arg\min_{v} \frac{1}{2}(v - d)^2 + \theta_1 \left| v - k \right| + \theta_2 \left| v \right|, \qquad (24) $$

where \( \theta_1 = K\lambda_1 W_1 / (N\eta) \) and \( \theta_2 = K\lambda_2 W_2 / (N\eta) \). The scalars \( y, d, v, k \) in Eq. (24) correspond to the four vectors \( {\overset{\smile }{x}}_k^{\left(t+1\right)},{{\overset{\smile }{r}}_{\mathrm{k}}}^{(t)},{\overset{\smile }{x}}_k,\overset{\smile }{u} \), respectively.

The scalar problem in Eq. (24) admits a closed-form solution built from two soft-thresholding operators [20], where \( S_1(d) \) and \( S_2(d) \) are as follows:

$$ S_1(d) = \operatorname{sign}(d)\max\left( \left| d \right| - \theta_1,\, 0 \right), \qquad (25) $$

$$ S_2(d) = \operatorname{sign}(d)\max\left( \left| d \right| - \theta_2,\, 0 \right). \qquad (26) $$
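As a rough illustration of this soft-thresholding machinery: the sketch below implements the basic operator and solves an assumed per-coefficient problem of the form \( \arg\min_v \frac{1}{2}(v-d)^2 + \theta_1|v-k| + \theta_2|v| \) numerically on a grid rather than via the closed form; the function names are hypothetical:

```python
import numpy as np

def soft(d, tau):
    """Soft-thresholding operator: sign(d) * max(|d| - tau, 0)."""
    return np.sign(d) * np.maximum(np.abs(d) - tau, 0.0)

def scalar_prox(d, k, theta1, theta2):
    """Brute-force grid solution of the assumed per-coefficient problem
    argmin_v 0.5*(v - d)^2 + theta1*|v - k| + theta2*|v|; a closed form
    built from soft-thresholding steps would replace this in practice."""
    grid = np.linspace(-10.0, 10.0, 200001)     # step 1e-4
    cost = 0.5 * (grid - d) ** 2 + theta1 * np.abs(grid - k) + theta2 * np.abs(grid)
    return grid[np.argmin(cost)]

# With k = 0 the two l1 terms merge, so the answer is soft(d, theta1 + theta2).
print(round(scalar_prox(2.0, 0.0, 0.3, 0.2), 4))   # 1.5 (= soft(2.0, 0.5))
```

The sanity check at the end confirms the grid solver agrees with plain soft-thresholding in the degenerate case, which is a useful unit test when implementing the closed form.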

After solving the scalar problems for every coefficient and applying the inverse DCT, we obtain all updated patches \( x_k \), \( 1 \le k \le D \). To reconstruct the whole image, we calculate the value of each pixel by averaging the overlapped patches at that pixel position. The expression of the reconstruction procedure is

$$ x^{(t+1)} = \left( \sum_{k=1}^{D} E_k^T E_k \right)^{-1} \sum_{k=1}^{D} E_k^T x_k^{(t+1)}, \qquad (27) $$

where \( E_k^T \) returns the patch to its original position in the image.
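The pixel-wise averaging of overlapped patches can be sketched as follows; the patch positions, sizes, and function name are toy values for illustration:

```python
import numpy as np

def aggregate(patches, positions, shape, S=8):
    """Average overlapped SxS patches back into an image: each pixel takes
    the mean of all patches covering it (sum of put-backs divided by counts)."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for p, (i, j) in zip(patches, positions):
        acc[i:i+S, j:j+S] += p.reshape(S, S)   # put patch back in place
        cnt[i:i+S, j:j+S] += 1.0               # how many patches cover each pixel
    return acc / cnt

img = np.full((16, 16), 7.0)
pos = [(i, j) for i in range(0, 9, 4) for j in range(0, 9, 4)]  # stride-4 grid
patches = [img[i:i+8, j:j+8].reshape(-1) for (i, j) in pos]
rec = aggregate(patches, pos, (16, 16), S=8)
print(np.allclose(rec, 7.0))   # True: constant patches average back exactly
```

The count array plays the role of the diagonal normalization matrix: dividing by it guarantees that regions covered by many patches are not over-weighted.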

After solving the z sub-problem and the x sub-problem, we update the Bregman variable *b* and repeat the three steps until the iterations are completed. A detailed description of the algorithm is given in Algorithm 2.

## 3 Results and discussion

In this section, extensive experiments are conducted to verify the performance of the proposed reconstruction algorithm, and we compare its reconstruction quality with that of four other algorithms. The model parameters are set as follows: the image patch size is 64 (i.e., 8 × 8), the size of the search window is 20 × 20, and the remaining parameters are given in detail below. All experiments in this paper are run on an Intel(R) Core(TM) i3 3.0-GHz CPU with MATLAB 2012b on the Windows 10 operating system.

### 3.1 Discussion

#### 3.1.1 Complexity of search window size

We test the computational complexity of the algorithm on the image *Boat* at different ratios. The algorithm complexity for different search window sizes *L* is shown in Fig. 2.

#### 3.1.2 Effect of similar patches

Experiments with the number of similar patches *C* ranging from 2 to 24 are conducted. From the experimental results in Fig. 3, we can see that the proposed algorithm is insensitive to the number of similar patches, because all the curves are close to flat. For different images, the best number of similar patches shows a certain degree of consistency. According to extensive experimental results, all test images achieve the highest and most stable performance when *C* is 10, while larger values only increase the computational complexity without improving performance. Therefore, we set *C* = 10.

#### 3.1.3 Effect of regularization parameters

In this section, we test the effect of regularization parameters *λ*_{1} and *λ*_{2} to the performance of reconstructed image.

The regularization parameters *λ*_{1} and *λ*_{2} affect the performance of the model simultaneously; therefore, we fix one parameter and test the other. It can be seen from Fig. 4 that if *λ*_{1} or *λ*_{2} is too large or too small, the image reconstruction performance suffers. Moreover, for different images, the effect of *λ*_{1} and *λ*_{2} on the PSNR is consistent; that is, there exist optimal regularization parameters *λ*_{1} and *λ*_{2} that maximize the performance of the algorithm across different images. In this paper, we set *λ*_{1} = 2.5e−3 and *λ*_{2} = 2.5e−4.

To verify the benefit of the double sparse constraint, *λ*_{1} and *λ*_{2} are set to zero separately, so that only one constraint term remains active. The experimental results are shown in Fig. 5. For different images, the PSNR of the images reconstructed with a single sparse constraint term is lower than that of the images reconstructed by the proposed model; that is, the proposed model, which combines nonlocal similarity and local sparsity, performs better than models that use either property alone, and it achieves the highest PSNR. For the image *Boat* in Fig. 5, the proposed algorithm improves the PSNR by about 0.01 dB and 1 dB compared with the two single-constraint models; for the image *Lena*, the improvements are about 0.06 dB and 0.4 dB.

#### 3.1.4 Effect of weight parameter

The parameter *h* in the weight *α*_{k, i} influences the accuracy of the similar patches and thus the performance of the reconstructed image. Therefore, we vary *h* from 1 to 120 and test the reconstruction performance on three images. It can be seen from Fig. 6 that the value of *h* has a certain influence on the quality of the reconstructed image: the PSNR is stable and reaches its maximum within the range from 70 to 120. Different test images show consistency; that is, they achieve the best reconstruction performance over the same range of *h*. Therefore, we set *h* = 80 based on the experimental results.

### 3.2 Reconstruction results

In this paper, we have the original image *x* ∈ ℝ^{N} and its corresponding measurement *y* ∈ ℝ^{M}. *H* is a measurement matrix whose size is *M* × *N*. Compressive sensing is intended to recover the original high-quality image from the measured value with high probability. The measurement rate is expressed by the ratio and is equal to *M*/*N*.

PSNR/SSIM comparisons with various image reconstruction algorithms

| Ratio | Algorithm | Goldhill | Barbara | Cameraman | Boat | Lena | Peppers |
|---|---|---|---|---|---|---|---|
| 20% | MH | 30.16/0.82 | 31.21/0.91 | 33.87/0.93 | 29.34/0.82 | 32.82/0.89 | 32.81/0.86 |
| | BCS-SPL-DCT | 27.95/0.69 | 24.36/0.71 | 30.07/0.89 | 27.03/0.73 | 30.49/0.85 | 29.50/0.82 |
| | BCS-SPL-DWT | 25.22/0.64 | 22.24/0.59 | 22.74/0.77 | 22.58/0.62 | 24.29/0.69 | 24.08/0.64 |
| | CoS | 30.59/0.81 | 25.60/0.78 | 34.49/0.93 | 30.28/0.83 | 32.72/0.89 | 33.60/0.87 |
| | Proposed | 31.28/0.94 | 32.81/0.98 | 36.32/0.98 | 30.87/0.95 | 33.92/0.97 | 34.03/0.97 |
| 30% | MH | 31.83/0.86 | 33.60/0.94 | 36.39/0.96 | 31.00/0.86 | 34.83/0.92 | 34.31/0.89 |
| | BCS-SPL-DCT | 29.62/0.80 | 25.92/0.78 | 33.03/0.93 | 28.88/0.79 | 32.49/0.88 | 31.62/0.85 |
| | BCS-SPL-DWT | 26.71/0.71 | 23.43/0.66 | 26.65/0.86 | 24.24/0.68 | 26.34/0.75 | 26.05/0.70 |
| | CoS | 32.21/0.86 | 28.54/0.87 | 37.25/0.96 | 32.27/0.88 | 34.79/0.92 | 34.91/0.89 |
| | Proposed | 33.01/0.96 | 35.11/0.99 | 39.77/0.99 | 33.08/0.96 | 35.93/0.98 | 35.38/0.97 |
| 40% | MH | 33.12/0.89 | 35.33/0.95 | 38.59/0.97 | 32.78/0.89 | 36.23/0.94 | 35.59/0.91 |
| | BCS-SPL-DCT | 30.47/0.84 | 27.40/0.84 | 35.47/0.95 | 30.57/0.84 | 34.20/0.91 | 31.75/0.88 |
| | BCS-SPL-DWT | 28.32/0.77 | 24.78/0.73 | 32.00/0.93 | 26.58/0.81 | 28.50/0.81 | 28.11/0.76 |
| | CoS | 33.63/0.90 | 31.50/0.92 | 39.53/0.97 | 33.90/0.90 | 36.17/0.93 | 35.78/0.91 |
| | Proposed | 34.53/0.97 | 36.89/0.99 | 42.65/1.00 | 34.87/0.98 | 37.54/0.98 | 36.60/0.98 |

The table and figures show the PSNR and SSIM of the images reconstructed by the different algorithms at ratios of 20%, 30%, and 40%, respectively. According to the experimental results, the images reconstructed by BCS-SPL-DCT and BCS-SPL-DWT are generally blurry, their texture structure is not clear enough, and their visual effects are worse than those of the other three algorithms. The images reconstructed by the MH algorithm and the CoS algorithm both have better quality, and their visual effects are not much different; in particular, the CoS algorithm reconstructs images with stronger texture. In terms of PSNR, the images reconstructed by the proposed algorithm score higher than those reconstructed by the MH and CoS algorithms; that is, the algorithm proposed in this paper effectively improves the quality of image reconstruction and is generally better than the four representative compressive sensing reconstruction algorithms above.

## 4 Conclusion

In this paper, we propose a reweighted double sparse constraint reconstruction model. The model not only takes full advantage of the nonlocal similarity and the local sparsity, using the residual model and ℓ1 regularization to enhance sparsity, but also employs reweighting to further improve the quality of the reconstructed image. The model is mathematically defined as the solution of a reweighted ℓ1 minimization problem, and an effective solution based on the SBI framework is designed to reduce the computational complexity. Under the SBI framework, the model is transformed from an unconstrained problem into multiple simple sub-problems, which we solve with the steepest descent method and the soft-thresholding algorithm, respectively. Extensive experiments demonstrate that the proposed image reconstruction model achieves better reconstruction quality and visual effect than the four other representative algorithms. Future work includes optimizing image block matching and further reducing computational complexity.

## Declarations

### Acknowledgements

The authors thank the editor and reviewers.

### Funding

Financial support for this work was provided by the National Natural Science Foundation of China [grant number 61501069] and the Fundamental Research Funds for the Central Universities [grant number 106112016CDJXZ168815].

### Availability of data and materials

The datasets supporting the conclusions of this article are included within the article.

### Authors’ contributions

YZ conceived and designed the experiments. JZ and XC performed the experiments and analyzed the data. GH and ZZ wrote the paper. ZH put forward constructive comments. All authors read and approved the final manuscript.

### Authors’ information

1. Yuanhong Zhong received his BS, MS, and PhD degrees in communication engineering from Chongqing University, Chongqing, China, in 2003, 2006, and 2011, respectively. He is currently an Associate Professor with the School of Microelectronics and Communication Engineering, Chongqing University. His research interests include image processing, machine learning, and computer vision.
2. Jing Zhang received her BS degree in communication engineering from Chongqing University, Chongqing, China, in 2018. She is now a postgraduate in the School of Microelectronics and Communication Engineering at Chongqing University.
3. Xinyu Cheng received his BS degree in electronic information engineering from Chongqing University, Chongqing, China, in 2018. He is now a postgraduate in the School of Microelectronics and Communication Engineering at Chongqing University.
4. Guan Huang received her BS degree in communication engineering from Chongqing University, Chongqing, China, in 2018. She is now a postgraduate in the School of Automotive Engineering at Chongqing University.
5. Zhaokun Zhou received his BS degree in communication engineering from Chongqing University, Chongqing, China, in 2018. He is now a postgraduate in the School of Automotive Engineering at Chongqing University.
6. Zhiyong Huang received the BSc degree in electrical engineering and the PhD degree in electronic engineering from Chongqing University, Chongqing, China, in 2001 and 2009, respectively. He is currently an Associate Professor with the School of Microelectronics and Communication Engineering, Chongqing University. His research interests include image/video processing, computer vision, and machine learning.

### Competing interests

The authors declare that they have no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


## References

1. E.J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory **52**(2), 489–509 (2006)
2. K.V. Siddamal, S.P. Bhat, V.S. Saroja, A survey on compressive sensing, in *IEEE 2nd International Conference on Electronics and Communication Systems* (2015), pp. 639–643
3. L. Gan, Block compressed sensing of natural images, in *IEEE International Conference on Digital Signal Processing* (2007), pp. 403–406
4. A.M. Bruckstein, D.L. Donoho, M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. **51**(1), 34–81 (2009)
5. J. Mairal, M. Elad, G. Sapiro, Sparse representation for color image restoration. IEEE Trans. Image Process. **17**(1), 53–69 (2007)
6. S. Mun, J.E. Fowler, Block compressed sensing of images using directional transforms, in *IEEE 17th International Conference on Image Processing* (2010), p. 547
7. S. Kindermann, S. Osher, P.W. Jones, Deblurring and denoising of images by nonlocal functionals. SIAM Multiscale Model. Simul. **4**(4), 1091–1115 (2005)
8. X. Zhang, M. Burger, X. Bresson, Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imag. Sci. **3**(3), 253–276 (2010)
9. C. Chen, E.W. Tramel, J.E. Fowler, Compressed-sensing recovery of images and video using multihypothesis predictions, in *IEEE 46th Asilomar Conference on Signals, Systems and Computers* (2012), pp. 1193–1198
10. J. Zhang, D. Zhao, C. Zhao, Image compressive sensing recovery via collaborative sparsity. IEEE J. Emerging Sel. Top. Circuits Syst. **2**(3), 380–391 (2012)
11. W. Dong, G. Shi, X. Li, Compressive sensing via nonlocal low-rank regularization. IEEE Trans. Image Process. **23**(8), 3618–3632 (2014)
12. J. Zhang, C. Zhao, D. Zhao, W. Gao, Image compressive sensing recovery using adaptively learned sparsifying basis via ℓ0 minimization. Signal Process. **103**(10), 114–126 (2014)
13. E.J. Candès, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl. **14**(5), 877–905 (2007)
14. M.S. Asif, J. Romberg, Fast and accurate algorithms for re-weighted ℓ1-norm minimization. IEEE Trans. Signal Process. **61**(23), 5905–5916 (2013)
15. S. Mun, J.E. Fowler, Residual reconstruction for block-based compressed sensing of video, in *IEEE Data Compression Conference* (2011), pp. 183–192
16. C. Zhao, S. Ma, J. Zhang, R. Xiong, W. Gao, Video compressive sensing reconstruction via reweighted residual sparsity. IEEE Trans. Circuits Syst. Video Technol. **27**(6), 1182–1195 (2017)
17. T. Goldstein, S. Osher, The split Bregman method for ℓ1-regularized problems. SIAM J. Imag. Sci. **2**(2), 323–343 (2009)
18. P. Deift, X. Zhou, A steepest descent method for oscillatory Riemann–Hilbert problems. Ann. Math. **137**(2), 295–368 (1993)
19. J. Zhang, C. Zhao, D. Zhao, Image compressive sensing recovery using adaptively learned sparsifying basis via ℓ0 minimization. Signal Process. **103**(10), 114–126 (2014)
20. D.L. Donoho, De-noising by soft-thresholding. IEEE Trans. Inf. Theory **41**(3), 613–627 (1995)