 Research
 Open Access
Reconstruction for block-based compressive sensing of image with reweighted double sparse constraint
EURASIP Journal on Image and Video Processing volume 2019, Article number: 63 (2019)
Abstract
Block compressive sensing reduces computational complexity by dividing the image into multiple patches that are processed separately, but this degrades the performance of the reconstruction algorithm. Generally, a reconstruction algorithm improves the quality of the reconstructed image by adding various constraints and regularization terms, namely prior information. In this paper, a reweighted double sparse constraint reconstruction model that combines residual sparsity and an ℓ1 regularization term is proposed. The residual sparsity exploits the nonlocal similarity of image patches, while the ℓ1 regularization term exploits the local sparsity of image patches. The resulting model is solved within the framework of split Bregman iteration (SBI). Extensive experiments show that the proposed algorithm reconstructs the original image efficiently and is comparable to current representative compressive sensing reconstruction algorithms.
Introduction
Compressive sensing (CS) theory, proposed by Candès et al. [1], breaks the limitation of the Nyquist sampling theorem: it can recover the original signal at a sampling rate lower than twice the bandwidth, provided the signal is sparse in some domain. Since its birth, CS theory has been widely used in various fields, such as nuclear magnetic resonance, image processing, analog-to-information conversion, and compressive radar [2].
Suppose that x ∈ ℝ^{N} is a finite-length signal that is sparse or compressible. According to CS theory [1], rather than observing the original signal x directly, a much smaller number of linear measurements is acquired by the following random linear projection:
\( y= Hx+n \)  (1)
where y ∈ ℝ^{M} is the measurement of the signal, H ∈ ℝ^{M × N} represents the measurement matrix, and n denotes additive noise. Since M is much smaller than N, recovering x from Eq. (1) is an ill-posed inverse problem.
To solve this problem effectively, prior knowledge is needed, which leads to the following regularization-based optimization problem
\( \underset{x}{\min}\frac{1}{2}{\left\Vert Hx-y\right\Vert}_2^2+\lambda \Psi (x) \)  (2)
where \( {\left\Vert Hx-y\right\Vert}_2^2 \) is the data fidelity term, Ψ(x) is a regularization term with various choices such as the ℓ0 norm, the ℓ1 norm, or total variation (TV), and λ is a regularization parameter.
Since natural images are almost always compressible, applying compressive sensing to them is natural. However, if compressive sensing is applied directly to a large-sized image, the measurement matrix becomes very “large,” which requires huge amounts of memory and results in high computational complexity. Therefore, the block-based compressive sensing (BCS) method [3] was proposed. In BCS, the image is divided into multiple patches and each patch is measured separately, so the computational complexity is greatly reduced. Nevertheless, the quality of the reconstructed image is degraded compared to reconstruction from measurements of the entire image.
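To make the block-wise measurement concrete, the following sketch samples each non-overlapping block with one shared Gaussian matrix; the block size, sampling ratio, and function name are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def bcs_sample(image, block_size=8, ratio=0.3, seed=0):
    """Block-based CS sampling: one small measurement matrix reused per block."""
    rng = np.random.default_rng(seed)
    n = block_size * block_size
    m = max(1, int(round(ratio * n)))               # measurements per block
    phi = rng.standard_normal((m, n)) / np.sqrt(m)  # shared Gaussian matrix
    rows, cols = image.shape
    measurements = []
    for r in range(0, rows, block_size):
        for c in range(0, cols, block_size):
            block = image[r:r + block_size, c:c + block_size].reshape(-1)
            measurements.append(phi @ block)        # y_k = Phi * p_k
    return phi, np.stack(measurements)

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
phi, y = bcs_sample(img, block_size=8, ratio=0.3)
```

Instead of one M × N matrix for the whole N-pixel image, only an m × B² matrix is stored, which is exactly the memory saving BCS is designed for.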
Because prior knowledge has a crucial influence on the performance of an image reconstruction algorithm, designing an effective regularization term helps make full use of image priors and further improves the quality of the reconstructed image. Sparsity and nonlocal similarity, two of the most important properties of images, are exploited to improve the quality of reconstructed images. Sparsity means that the original image can be represented with only a few nonzero (or near-zero) coefficients; more specifically, the image becomes more sparsely organized in some domain [4, 5]. Various predetermined transform bases, including the discrete cosine transform (DCT) and the discrete wavelet transform (DWT), have been used to exploit sparsity and to derive reconstruction algorithms such as smoothed projected Landweber BCS based on DCT (BCS-SPL-DCT) [6] and on DWT (BCS-SPL-DWT) [6]. Furthermore, to enrich the texture and structure in recovered images [7, 8], the multi-hypothesis (MH) prediction method [9], which explores nonlocal similarities, was proposed. Sharing a similar idea, the methods in [10,11,12] exploit nonlocal similarities to design local sparsifying transforms and achieve better recovery performance than earlier BCS algorithms that do not use nonlocal similarities. However, the recovered images still contain some visual artifacts.
For better CS recovery, some researchers have used the idea of weighting the signal. Candès et al. [13] proposed a weighting scheme based on the magnitudes of the signal coefficients to get closer to the ℓ0 norm while still using the ℓ1 norm in the optimization problem. In a similar manner, Asif et al. [14] adaptively assigned weights based on signal homotopy.
To better reconstruct the original image in both smooth and edge regions, this paper proposes a method that combines sparsity and nonlocal similarity by means of reweighting. Firstly, the residual, which represents the difference between the target image patch and a combination of its similar patches, is calculated; it is sparser than the patch itself, so residual sparsity is applied to exploit the nonlocal similarity. Secondly, reweighting is applied in the sparse constraint to account for the sparsity differences among image patches and to further enhance sparsity. Thirdly, we use the SBI framework to split the proposed model into several subproblems that can be solved effectively.
The main contributions of this paper are as follows.

1.
We propose a reconstruction model that combines residual sparsity and an ℓ1 regularization term by way of reweighting. It utilizes local sparsity and nonlocal self-similarity simultaneously, so the model can further improve the performance of the reconstruction algorithm.

2.
To solve the reweighted double sparse constraint model, we design an effective scheme based on the split Bregman iteration (SBI) algorithm. Extensive experiments are conducted, and our method is compared with other typical algorithms using PSNR and SSIM.
The rest of this paper is organized as follows. Section 2 introduces the reconstruction model in four parts: the residual model, reweighted sparse representation, weight estimation, and the solution of the proposed method. Section 3 presents extensive experiments and compares the results with other representative methods. Section 4 concludes the paper.
Proposed method
In this section, we give a detailed description of the proposed reweighted double sparse model. Firstly, to control the computational complexity and memory requirements, we use BCS [3] to divide the original image x into non-overlapped patches p_{k}, 1 ≤ k ≤ P, of size B². The MH prediction method [9] is used to recover the image initially. Then, we divide the initial recovery x^{int} into D overlapped patches x_{k}, 1 ≤ k ≤ D, of size S². This partitioning process is formulated as
\( {x}_k={E}_k{x}^{\mathrm{int}} \)  (3)
where E_{k} is a matrix operator that extracts the patch x_{k} from x^{int} in an overlapped manner. We consider the residual between each patch and the linear combination of its similar patches, since the residual exhibits stronger sparsity [15]. Considering that different residuals have different sparsity, we use reweighting. At the same time, the sparsity of the image patch itself is taken into account, and we enhance this sparsity by reweighted ℓ1 minimization [13]. Thus, the reweighted double sparse constraint model is expressed as
\( \underset{x}{\min}\frac{1}{2}{\left\Vert Hx-y\right\Vert}_2^2+{\lambda}_1{\left\Vert {W}_1\left(x-u\right)\right\Vert}_1+{\lambda}_2{\left\Vert {W}_2x\right\Vert}_1 \)  (4)
where λ_{1} and λ_{2} denote regularization constants, y represents the corresponding measurement, W_{1} and W_{2} are reweighting matrices that are updated iteratively, u is the linear combination of the similar patches, and H is a block-diagonal matrix whose blocks on the diagonal are the measurement matrix Ψ of size M × S². The expression of H is written as
\( H=\operatorname{diag}\left(\Psi, \Psi, \dots, \Psi \right) \)  (5)
The residual term ‖W_{1}(x − u)‖_{1} exploits the nonlocal similarity, and the ℓ1 regularization term ‖W_{2}x‖_{1} exploits the local sparsity of the image patch. W_{1} discriminatively weights the different residual coefficients, and W_{2} reweights the ℓ1 minimization. Combining W_{1} and W_{2} further enhances sparsity. In the proposed model, nonlocal similarity and local sparsity are combined effectively to improve the reconstruction quality.
Next, we will introduce the model in detail, including the residual model, reweighted sparse representation, and weight estimation.
Residual model
In this paper, similar patches are searched within an L × L window centered at the location of the patch x_{k}. We select the C most similar patches, denoted by m_{k, i}, 1 ≤ i ≤ C. The similarity between patches is measured by the mean squared error (MSE) [16], which has the expression
\( \mathrm{MSE}\left({x}_k,{m}_{k,i}\right)=\frac{1}{S^2}{\left\Vert {x}_k-{m}_{k,i}\right\Vert}_2^2 \)  (6)
where S^{2} is the size of the image patch.
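A search for the C most similar patches inside the L × L window, scored by the MSE criterion above, might be sketched as follows; the window bookkeeping and the function name are assumptions, not the paper's exact procedure.

```python
import numpy as np

def find_similar_patches(image, top, left, S=8, L=20, C=10):
    """Return the C patches most similar (lowest MSE) to the SxS target patch
    whose top-left corner is (top, left), searching an LxL window around it."""
    target = image[top:top + S, left:left + S]
    rows, cols = image.shape
    r0, r1 = max(0, top - L // 2), min(rows - S, top + L // 2)
    c0, c1 = max(0, left - L // 2), min(cols - S, left + L // 2)
    scored = []
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            if (r, c) == (top, left):
                continue                             # skip the target itself
            patch = image[r:r + S, c:c + S]
            mse = np.mean((patch - target) ** 2)     # MSE similarity, Eq. (6)
            scored.append((mse, patch))
    scored.sort(key=lambda t: t[0])                  # most similar first
    return [p for _, p in scored[:C]]
```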
The residual of the image patch x_{k} is obtained from the patch itself and the linear combination of its similar patches m_{k, i}, 1 ≤ i ≤ C, and can be expressed as
\( {r}_k={x}_k-\sum \limits_{i=1}^C{\alpha}_{k,i}{m}_{k,i} \)  (7)
where α_{k, i} is the weight of the similar patch and directly reflects its accuracy. Patches that are more similar to the image patch x_{k} should be assigned greater weight. Therefore, α_{k, i} is proportional to the similarity between the similar patch m_{k, i} and the image patch x_{k}, that is
where h is a constant. Through Eq. (8), we can calculate the weights of the similar patches to reflect their similarity efficiently.
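The text states only that α_{k,i} is proportional to patch similarity through a constant h; a common concrete choice is a normalized exponential kernel, as in nonlocal-means filtering, used here purely as an assumption:

```python
import numpy as np

def patch_weights(target, similar_patches, h=80.0):
    """Combination weights alpha_{k,i}: higher similarity (lower MSE) gives a
    larger weight. The exponential kernel is an assumed form, not the paper's
    exact Eq. (8)."""
    mses = np.array([np.mean((target - m) ** 2) for m in similar_patches])
    w = np.exp(-mses / h)
    return w / w.sum()          # normalize so the weights sum to one
```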
Reweighted sparse representation
The sparsity of the signal residual in some domain is well accepted and widely adopted [15, 16]. There are various sparsifying transforms, such as the Fourier transform, the wavelet transform, and the Gabor transform. In this paper, we use the DCT to sparsify the residual and the image patch because of its high efficiency and low complexity. The sparse transform of the residual under the DCT is
where φ denotes the DCT basis, φ^{T} is the transpose of the DCT basis, and \( {\overset{\smile }{x}}_k=\varphi {x}_k,\kern0.5em {\overset{\smile }{m}}_{k,i}=\varphi {m}_{k,i} \).
The residual is constrained by weighting, and the reweighted residual sparsity term obtained by combining Eq. (9) is as follows
where W_{1, k} is a diagonal matrix with \( {w}_{k,1},\dots, {w}_{k,{\mathrm{S}}^2} \) on the diagonal and zeros elsewhere. Through Eq. (10), we obtain the reweighted sparse representation of the residual. The calculation of the weights is described in detail in the next section.
Weight estimation
In each iteration, the weights need to be updated. The weights constrain the residual coefficients of each patch x_{k}. The more similar the patches m_{k, i} are to the target patch x_{k}, the greater the weight. Therefore, the weight W_{1, k} in Eq. (10) is inversely proportional to the magnitude of the residual coefficient [13]. We calculate the weight from the following equation
where \( {\overset{\smile }{u}}_k={\sum}_{i=1}^C{\alpha}_{k,i}{\overset{\smile }{m}}_{k,i} \).
For the image patch x_{k}, the expression of its reweighted sparse representation is
where W_{2, k} is a diagonal matrix similar to W_{1, k}, with \( {\widehat{w}}_{k,1},\dots, {\widehat{w}}_{k,{S}^2} \) on the diagonal. We use the image patch x_{k} to calculate the weight by
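Following the reweighting idea of Candès et al. [13], both W₁ and W₂ can be refreshed each iteration with weights inversely proportional to the current coefficient magnitudes; the stabilizing constant eps is an assumed implementation detail, not taken from the paper:

```python
import numpy as np

def update_weights(residual_coeffs, patch_coeffs, eps=1e-3):
    """Weights inversely proportional to coefficient magnitude [13]:
    small coefficients get large weights, pushing them further toward zero,
    while large (significant) coefficients are penalized less."""
    W1 = 1.0 / (np.abs(residual_coeffs) + eps)   # for the residual term
    W2 = 1.0 / (np.abs(patch_coeffs) + eps)      # for the l1 term
    return W1, W2
```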
Solution to the proposed model
In this section, we detail the solution procedure. Eq. (4) is an unconstrained problem; solving it directly involves a large amount of computation and high complexity. To simplify the calculation, we use the SBI algorithm to transform the unconstrained problem into several simpler subproblems and solve them separately [17]. Firstly, we briefly introduce the main steps of the SBI algorithm. The constrained optimization problem has the form
where G ∈ ℝ^{N × M}, f : ℝ^{N} → ℝ, and g : ℝ^{M} → ℝ. The SBI solution process is shown in Algorithm 1.
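Since the Algorithm 1 listing is a table, a minimal sketch of its three-step loop is given below; the update order and the Bregman-variable update follow the standard scheme in [17], and the two solver callables are placeholders for the subproblems derived next:

```python
def split_bregman(solve_z, solve_x, x0, iters=50):
    """Skeleton of the split Bregman iteration (Algorithm 1).
    solve_z / solve_x solve the two penalized subproblems;
    b is the Bregman (dual) variable enforcing the constraint z = x."""
    x, b = x0, 0.0
    for _ in range(iters):
        z = solve_z(x, b)       # data-fidelity subproblem
        x = solve_x(z, b)       # sparsity subproblem
        b = b - (z - x)         # Bregman variable update
    return x

# Toy 1-D check with eta = 1: minimize 0.5*(z - 3)^2 subject to z = x.
solve_z = lambda x, b: (3.0 + (x + b)) / 2.0   # argmin 0.5(z-3)^2 + 0.5(z-x-b)^2
solve_x = lambda z, b: z - b                   # argmin 0.5(z-x-b)^2
```

On this toy problem the loop converges to the constrained minimizer x = 3.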
Now we solve Eq. (4) in the SBI framework. We first introduce a variable z to convert Eq. (4) into the equivalent constrained form
Then, invoking SBI, line 3 in Algorithm 1 becomes
\( {z}^{\left(t+1\right)}=\arg \underset{z}{\min}\frac{1}{2}{\left\Vert Hz-y\right\Vert}_2^2+\frac{\eta }{2}{\left\Vert z-{x}^{(t)}-{b}^{(t)}\right\Vert}_2^2 \)  (16)
Line 4 in Algorithm 1 becomes
\( {x}^{\left(t+1\right)}=\arg \underset{x}{\min }{\lambda}_1\sum \limits_{k=1}^D{\left\Vert {W}_{1,k}\left({\overset{\smile }{x}}_k-{\overset{\smile }{u}}_k\right)\right\Vert}_1+{\lambda}_2\sum \limits_{k=1}^D{\left\Vert {W}_{2,k}{\overset{\smile }{x}}_k\right\Vert}_1+\frac{\eta }{2}{\left\Vert {z}^{\left(t+1\right)}-x-{b}^{(t)}\right\Vert}_2^2 \)  (17)
Line 5 in Algorithm 1 becomes
\( {b}^{\left(t+1\right)}={b}^{(t)}-\left({z}^{\left(t+1\right)}-{x}^{\left(t+1\right)}\right) \)  (18)
where t represents the iteration number and η is a fixed parameter in SBI. Equation (4) has thus been transformed by the SBI algorithm into the z subproblem of Eq. (16) and the x subproblem of Eq. (17). Next, we give the detailed solution procedures for the two subproblems.
Z subproblem
For a given x, the z subproblem is a strictly convex quadratic. To avoid computing a matrix inverse and to reduce the computational complexity of the algorithm, we solve it with the steepest descent (SD) method [18]. For Eq. (16), the iterative formula is
\( {z}^{\left(t,i+1\right)}={z}^{\left(t,i\right)}-{\rho}^{\left(t,i\right)}{g}^{\left(t,i\right)} \)  (19)
where g^{(t, i)} denotes the gradient and ρ^{(t, i)} the optimal step size; t and i are the iteration counters of SBI and SD, respectively. The gradient and the optimal step are calculated as
For the above equation, we have
So, combining Eq. (20) and Eq. (22), we get
Then we put Eq. (23) into Eq. (21)
The gradient and the optimal step size are computed from the above equations, and the output of SD is the updated value z^{(t, i_max)}, where i_max = 300 in our experimental setting.
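Under the quadratic objective min_z ½‖Hz − y‖² + (η/2)‖z − c‖² with c = x^{(t)} + b^{(t)} (an assumption reconstructed from Eq. (16); the exact formulas here are not the paper's), the SD inner loop with exact line search can be sketched as:

```python
import numpy as np

def solve_z_sd(H, y, c, eta=1.0, iters=300):
    """Steepest descent for the strictly convex quadratic z subproblem
        min_z 0.5*||H z - y||^2 + (eta/2)*||z - c||^2
    using the gradient and the exact (optimal) step length."""
    z = c.copy()
    for _ in range(iters):
        g = H.T @ (H @ z - y) + eta * (z - c)         # gradient
        curv = np.sum((H @ g) ** 2) + eta * np.sum(g ** 2)
        if curv == 0.0:
            break                                     # already at the minimum
        rho = np.sum(g ** 2) / curv                   # optimal step size
        z -= rho * g
    return z
```

On small problems the result can be checked against the closed-form solution (H^T H + ηI)^{-1}(H^T y + ηc), which SD avoids computing explicitly.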
X subproblem
For a given z, we can rewrite Eq. (17) as
where r^{(t)} = z^{(t + 1)} − b^{(t)}.
In each iteration, we assume that the differences between the elements of x and r^{(t)} are independently distributed with zero mean. Then, according to the theorem in [19], we have
where K = D × S^{2}.
Due to the orthonormality of DCT and the Plancherel theorem, we get
Combining Eqs. (10), (12), (25), (26), and (27), we can get
which we decompose into D subproblems as follows
For Eq. (29), we convert it to a scalar problem, that is
where θ_{1} = Kλ_{1}W_{1}/(Nη) and θ_{2} = Kλ_{2}W_{2}/(Nη). The scalars y, d, v, k in Eq. (30) correspond to the four vectors \( {\overset{\smile }{x}}_k^{\left(t+1\right)},{{\overset{\smile }{r}}_{\mathrm{k}}}^{(t)},{\overset{\smile }{x}}_k,\overset{\smile }{u} \) in Eq. (29).
Through the soft threshold algorithm [20], we obtain the closedform solution of Eq. (30)
And the expressions of S_{1}(d) and S_{2}(d) are as follows
Through Eqs. (32) and (33), we can solve the x subproblem and obtain the values of \( {\overset{\smile }{x}}_k^{\left(t+1\right)} \). The value of the corresponding patch is calculated as
\( {x}_k^{\left(t+1\right)}={\varphi}^T{\overset{\smile }{x}}_k^{\left(t+1\right)} \)  (34)
After obtaining all the patches x_{k}, 1 ≤ k ≤ D, we compute each pixel value by averaging the overlapping patches at that pixel position. The reconstruction procedure is expressed as
\( x={\left(\sum \limits_{k=1}^D{E}_k^T{E}_k\right)}^{-1}\sum \limits_{k=1}^D{E}_k^T{x}_k \)  (35)
After solving the z subproblem and the x subproblem, we update b through Eq. (18) and repeat the three steps until the iterations are completed. The detailed description of the algorithm is given in Algorithm 2.
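The per-pixel averaging of overlapping patches can be sketched as follows; this is a standard aggregation routine, not the paper's exact operator:

```python
import numpy as np

def aggregate_patches(patches, positions, image_shape, S=8):
    """Average overlapping SxS patches into an image: each pixel becomes
    the mean of all patch values that cover it."""
    acc = np.zeros(image_shape)   # sum of patch values per pixel
    cnt = np.zeros(image_shape)   # number of patches covering each pixel
    for patch, (r, c) in zip(patches, positions):
        acc[r:r + S, c:c + S] += patch
        cnt[r:r + S, c:c + S] += 1.0
    cnt[cnt == 0] = 1.0           # guard against uncovered pixels
    return acc / cnt
```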
Results and discussion
In this section, extensive experiments are conducted to verify the reconstruction performance, and we compare the reconstruction quality of the proposed algorithm with that of four other algorithms. The model parameters are set as follows: the image patch size is 64 (8 × 8), the search window size is 20 × 20, and the remaining parameters are given in detail below. All experiments were run on an Intel(R) Core(TM) i3 3.0 GHz CPU with MATLAB 2012b on the Windows 10 operating system.
To evaluate the quality of the reconstructed image, we adopt the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), which are commonly used for objective assessment of image quality. A higher PSNR means better reconstruction quality and a clearly reconstructed image; a lower PSNR indicates poor quality, with problems such as edge blurring. SSIM reflects the structural similarity between the original and reconstructed images. The experimental results report the PSNR and SSIM of images reconstructed by the proposed algorithm and the four comparison algorithms. All standard greyscale test images are shown in Fig. 1.
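For reference, PSNR for 8-bit images follows the standard definition below (SSIM involves local statistics and is omitted here); this is textbook code, not code from the paper:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means better reconstruction."""
    diff = np.asarray(original, float) - np.asarray(reconstructed, float)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")     # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```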
Discussion
Complexity of search window size
A larger search window makes it possible to find patches that are more similar to the target patch, so a more accurate model can be obtained. However, it also brings higher computational complexity. In the experiment, we measured the computation time for the image Boat at different ratios. Figure 2 shows the algorithm complexity for different search window sizes L.
Effect of similar patches
To test the effect of the number of similar patches, experiments were conducted on three images with C ranging from 2 to 24. From the results in Fig. 3, the proposed algorithm is insensitive to the number of similar patches, because all the curves are nearly flat. Different images show similar behavior in this respect. According to extensive experimental results, all test images achieve the highest and most stable performance when C is around 10, so we set C = 10. We do not take a value larger than 10 in order to reduce the computational complexity while maintaining stable performance.
Effect of regularization parameters
In this section, we test the effect of the regularization parameters λ_{1} and λ_{2} on the performance of the reconstructed image.
To study the effect of the regularization parameters on performance, we test different values of each separately. Since λ_{1} and λ_{2} affect the performance of the model simultaneously, we fix one parameter and vary the other. As seen in Fig. 4, if λ_{1} or λ_{2} is too large or too small, the reconstruction performance suffers. Moreover, the effect of λ_{1} and λ_{2} on the PSNR is consistent across different images; that is, there exist optimal regularization parameters λ_{1} and λ_{2} that maximize the performance of the algorithm over the different images. In this paper, we set λ_{1} = 2.5e−3 and λ_{2} = 2.5e−4.
Next, to verify the performance of the proposed algorithm, we consider the cases where λ_{1} or λ_{2} is set to zero. The experimental results are shown in Fig. 5. For different images, the PSNR of images reconstructed with a single sparse constraint term is lower than that of the proposed model; that is, the proposed model, which combines nonlocal similarity and local sparsity, outperforms models that use either constraint alone, and it achieves the highest PSNR. For the image Boat in Fig. 5, the PSNR of the proposed algorithm is generally about 0.01 dB and 1 dB higher than the two single-constraint models; for the image Lena, it is about 0.06 dB and 0.4 dB higher.
Effect of weight parameter
The value of the parameter h in the weight α_{k, i} influences the accuracy of the similar patches and hence the performance of the reconstructed image. We therefore vary h from 1 to 120 and test the reconstruction performance on three images. As seen in Fig. 6, the value of h has a certain influence on the quality of the reconstructed image. The PSNR is stable and reaches its maximum in the range from 70 to 120. Different test images behave consistently; that is, they achieve the best reconstruction performance in the same range of h. Therefore, we set h = 80 based on the experimental results.
Reconstruction results
In this paper, we have the original image x ∈ ℝ^{N} and its corresponding measurement y ∈ ℝ^{M}; H is a measurement matrix of size M × N. Compressive sensing aims to recover the original high-quality image from the measurements with high probability. The measurement rate (ratio) equals M/N.
In our simulation experiments, the measurements are obtained by applying a Gaussian random projection matrix to the original image. We present the reconstruction results of four representative compressive sensing reconstruction algorithms: BCS-SPL-DCT [6], BCS-SPL-DWT [6], the MH method [9], and the collaborative sparsity (CoS) method [10]. It is worth noting that the MH and CoS methods are considered state-of-the-art algorithms for image CS reconstruction. The PSNR/SSIM of the reconstructed images are shown in Table 1.
We visually and intuitively compare the image reconstructed by the proposed algorithm with the images reconstructed by the other four algorithms. The comparison results are shown in Figs. 7, 8, 9, 10, 11, and 12.
The figures show the PSNR and SSIM of the images reconstructed by the different algorithms at ratios of 20%, 30%, and 40%, respectively. According to the experimental results, the images reconstructed by BCS-SPL-DCT and BCS-SPL-DWT are generally blurry, their texture is not clear enough, and their visual quality is worse than that of the other three algorithms. Images reconstructed by the MH and CoS algorithms have better quality, with little visual difference between them; in particular, the CoS algorithm reconstructs images with stronger texture. In terms of PSNR, the images reconstructed by the proposed algorithm score slightly higher than those of the MH and CoS algorithms; that is, the proposed algorithm effectively improves the quality of image reconstruction and is generally better than the four classic compressive sensing reconstruction algorithms above.
Conclusion
In this paper, we propose a reweighted double sparse constraint reconstruction model. The model takes full advantage of nonlocal similarity and local sparsity by using the residual model and ℓ1 regularization to enhance sparsity, and it is built by means of reweighting to further improve the quality of the reconstructed image. The model is mathematically defined as a reweighted ℓ1 minimization problem, and an effective solution based on the SBI framework is designed to reduce the computational complexity. Under the SBI framework, the model is transformed from an unconstrained problem into several simple subproblems, which we solve with the steepest descent method and the soft-threshold algorithm, respectively. Extensive experiments demonstrate that the proposed model yields better reconstruction quality and visual effects than the four other representative algorithms. Future work includes optimizing image block matching and reducing the computational complexity.
References
 1.
E.J. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theor. 52(2), 489–509 (2006)
 2.
K.V. Siddamal, S.P. Bhat, V.S. Saroja, A survey on compressive sensing, in IEEE 2nd International Conference on Electronics and Communication Systems (2015), pp. 639–643
 3.
L. Gan, Block compressed sensing of natural images, in IEEE International Conference on Digital Signal Processing (2007), pp. 403–406
 4.
A.M. Bruckstein, D.L. Donoho, M. Elad, From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 51(1), 34–81 (2009)
 5.
J. Mairal, M. Elad, G. Sapiro, Sparse representation for color image restoration. IEEE Trans. Image Process. 17(1), 53–69 (2007)
 6.
S. Mun, J.E. Fowler, Block compressed sensing of images using directional transforms, in IEEE 17th International Conference on Image Processing (2010), p. 547
 7.
S. Kindermann, S. Osher, P.W. Jones, Deblurring and denoising of images by nonlocal functionals. SIAM J. Multiscale Model. Simul. 4(4), 1091–1115 (2005)
 8.
X. Zhang, M. Burger, X. Bresson, Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imag. Sci. 3(3), 253–276 (2010)
 9.
C. Chen, E.W. Tramel, J.E. Fowler, Compressed-sensing recovery of images and video using multihypothesis predictions, in 46th Asilomar Conference on Signals, Systems and Computers (2012), pp. 1193–1198
 10.
J. Zhang, D. Zhao, C. Zhao, Image compressive sensing recovery via collaborative sparsity. IEEE J. Emerging Sel. Top. Circuits Syst. 2(3), 380–391 (2012)
 11.
W. Dong, G. Shi, X. Li, Compressive sensing via nonlocal lowrank regularization. IEEE Trans. Image Process. 23(8), 3618–3632 (2014)
 12.
J. Zhang, C. Zhao, D. Zhao, W. Gao, Image compressive sensing recovery using adaptively learned sparsifying basis via ℓ0 minimization. Signal Process. 103(10), 114–126 (2014)
 13.
E.J. Candès, M.B. Wakin, S.P. Boyd, Enhancing sparsity by reweighted L1 minimization. J. Fourier Anal. Appl. 14(5), 877–905 (2007)
 14.
M.S. Asif, J. Romberg, Fast and accurate algorithms for reweighted ℓ1norm minimization. IEEE Trans. Signal Process. 61(23), 5905–5916 (2013)
 15.
S. Mun, J.E. Fowler, Residual reconstruction for block-based compressed sensing of video, in IEEE Data Compression Conference (2011), pp. 183–192
 16.
C. Zhao, S. Ma, J. Zhang, R. Xiong, W. Gao, Video compressive sensing reconstruction via reweighted residual sparsity. IEEE Trans. Circuits Syst. Video Technol. 27(6), 1182–1195 (2017)
 17.
T. Goldstein, S. Osher, The split Bregman method for ℓ1-regularized problems. SIAM J. Imag. Sci. 2(2), 323–343 (2009)
 18.
P. Deift, X. Zhou, A steepest descent method for oscillatory Riemann–Hilbert problems. Ann. Math. 137(2), 295–368 (1993)
 19.
J. Zhang, C. Zhao, D. Zhao, Image compressive sensing recovery using adaptively learned sparsifying basis via L0 minimization. Signal Process. 103(10), 114–126 (2014)
 20.
D.L. Donoho, De-noising by soft-thresholding. IEEE Trans. Inf. Theory 41(3), 613–627 (1995)
Acknowledgements
The authors thank the editor and reviewers.
Funding
Financial support for this work was provided by the National Natural Science Foundation of China [grant number 61501069] and the Fundamental Research Funds for the Central Universities [grant number 106112016CDJXZ168815].
Availability of data and materials
The datasets supporting the conclusions of this article are included within the article.
Author information
Affiliations
Contributions
YZ conceived and designed the experiments. JZ and XC performed the experiments and analyzed the data. GH and ZZ wrote the paper. ZH put forward constructive comments. All authors read and approved the final manuscript.
Corresponding author
Correspondence to Yuanhong Zhong.
Ethics declarations
Authors’ information

1.
Yuanhong Zhong received his BS, MS, and PhD degrees in communication engineering from Chongqing University, Chongqing, China, in 2003, 2006, and 2011, respectively. He is currently an Associate Professor with the School of Microelectronics and Communication Engineering, Chongqing University. His research interests include image processing, machine learning, and computer vision.

2.
Jing Zhang received her BS degree in communication engineering from Chongqing University, Chongqing, China, in 2018. She is now a postgraduate in the School of Microelectronics and Communication Engineering at Chongqing University.

3.
Xinyu Cheng received his BS degree in electronic information engineering from Chongqing University, Chongqing, China, in 2018. He is now a postgraduate in the School of Microelectronics and Communication Engineering at Chongqing University.

4.
Guan Huang received her BS degree in communication engineering from Chongqing University, Chongqing, China, in 2018. She is now a postgraduate in the School of Automotive Engineering at Chongqing University.

5.
Zhaokun Zhou received his BS degree in communication engineering from Chongqing University, Chongqing, China, in 2018. He is now a postgraduate in the School of Automotive Engineering at Chongqing University.

6.
Zhiyong Huang received the B.Sc. degree in electric engineering and the Ph.D. degree in electronic engineering from Chongqing University, Chongqing, China, in 2001 and 2009, respectively. He is currently an Associate Professor with the School of Microelectronics and Communication Engineering, Chongqing University. His research interests include image/video processing, computer vision, and machine learning.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Zhong, Y., Zhang, J., Cheng, X. et al. Reconstruction for block-based compressive sensing of image with reweighted double sparse constraint. J Image Video Proc. 2019, 63 (2019) doi:10.1186/s1364001904641
Keywords
 Image reconstruction
 Compressive sensing
 Reweighted double sparse constraint