Lossy image compression based on prediction error and vector quantisation
© The Author(s). 2017
Received: 15 July 2016
Accepted: 10 May 2017
Published: 18 May 2017
Lossy image compression has been gaining importance in recent years due to the enormous increase in the volume of image data employed for Internet and other applications. In lossy compression, it is essential to ensure that the compression process does not adversely affect the quality of the image. The performance of a lossy compression algorithm is evaluated based on two conflicting parameters, namely, compression ratio and image quality, which is usually measured by PSNR values. In this paper, a new lossy compression method denoted as the PE-VQ method is proposed which employs prediction error and vector quantization (VQ) concepts. An optimum codebook is generated using a combination of two algorithms, namely, artificial bee colony and genetic algorithms. The performance of the proposed PE-VQ method is evaluated in terms of compression ratio (CR) and PSNR values using three different types of databases, namely, CLEF med 2009, Corel 1k and standard images (Lena, Barbara, etc.). Experiments are conducted for different codebook sizes and for different CR values. The results show that for a given CR, the proposed PE-VQ technique yields a higher PSNR value compared to the existing algorithms. It is also shown that higher PSNR values can be obtained by applying VQ on prediction errors rather than on the original image pixels.
In recent decades, since huge volumes of image data are transmitted over networks, image compression has become one of the most important research areas [1, 2]. With the rapid development of digital image technology, the demand to save raw image data for further image processing or repeated compression is increasing. Moreover, the recent growth of large volumes of image data in web applications has necessitated not only more efficient image processing techniques but also more efficient compression methods for storage and transmission applications [1, 2]. The goal of image compression is not only to reduce the number of bits needed to represent the images but also to adapt the image quality to the users’ requirements [3, 4].
A variety of compression techniques has been proposed in recent years [2, 4, 5]. Similarly, prediction techniques have been widely used in many applications including video processing [6, 7]. A new algorithm which makes use of prediction error and vector quantization concepts, denoted as the PE-VQ method, is proposed in this paper. An artificial neural network (ANN) [8, 9] is employed for prediction. Instead of the original image pixel value, only the prediction error (PE), which is the difference between the original and predicted pixel values, is used in the compression process. Identical ANNs are employed at both the compression and decompression stages. In order to achieve compression, vector quantization (VQ) is applied to the prediction errors. Vector quantization has become one of the most popular lossy image compression techniques due to its simplicity and its capability to achieve high compression ratios [10, 11]. VQ involves finding the closest vector in the error codebook to represent a vector of image pixels. When the closest vector is identified, only the index of the vector in the codebook is transmitted, thus achieving compression. It is found that higher PSNR values can be obtained by applying VQ on the prediction errors instead of on the original image pixels.
The VQ approach employed in this paper makes use of a novel technique which applies a combination of two algorithms, namely, the artificial bee colony (ABC) algorithm and the genetic algorithm (GA), for constructing the codebook. It is shown that this new method yields a better-optimised error codebook compared to the existing methods.
The remainder of the paper is organised as follows: Section 2 describes the related work. The proposed methodology is discussed in Section 3. Section 4 describes the databases used for the evaluation of the proposed method. Experimental results and discussions are given in Section 5, and conclusions are presented in Section 6.
2 Related work
Generally, compression methods can be categorised into two classes, lossy and lossless methods. Lossy compression can achieve high compression ratios (CR) compared to lossless compression methods. Several lossy methods which focus mostly on compression ratios without much consideration for image quality have been published [15, 16]. A lossy hyperspectral image compression scheme based on intra-band prediction and inter-band fractal encoding has also been reported. Chen et al. (2016) have proposed a lossy image compression scheme which combines principal component analysis and contourlet transform to achieve a high compression ratio along with better image quality.
Engelson et al. (2000) have proposed lossless and lossy delta compression schemes for time series data. Lindstrom et al. (2006) have proposed prediction-based lossless and lossy floating-point compression schemes for 2D and 3D grids which use the Lorenzo predictor and a fast entropy encoding scheme. Zhang et al. (2008) have presented an image compression method, called prediction by partial approximate matching (PPAM), which involves four basic steps, namely, preprocessing, prediction, context modelling, and arithmetic coding. Bruylants et al. (2015) have proposed a lossy compression scheme for medical images which makes use of a generic codec framework that supports JPEG 2000 with its volumetric extension (JP3D) and directional wavelet transforms.
Lu and Chang (2010) have surveyed the developments in vector quantization codebook generation schemes, which include the enhanced Linde-Buzo-Gray method and neural network and genetic based algorithms. Hosseini and Naghsh-Nilchi (2012) have proposed a context-based method to overcome the challenges of contextual vector quantization. In their method, the regions of interest are identified first; low compression is then applied to these regions while high compression is applied to the background regions. Horng (2012) has proposed a method based on the firefly algorithm to construct the codebook. Thepade and Devkar (2015) have proposed a vector quantization (VQ)-based lossy image compression method which uses a new codebook generation technique called Thepade’s Hartley error vector rotation method.
Mohamed Uvaze et al. have proposed a codebook-based compression method called the IP-VQ method where the codebook is constructed based on the original image pixels. A new method called the PE-VQ method is presented in this paper which differs from the IP-VQ method in that the codebook construction is based on the prediction errors rather than on the image pixels. It is expected that a prediction error-based codebook can achieve higher compression ratios as well as higher PSNR values compared to an image pixel-based codebook for the following reasons: (i) In the PE-VQ method, the size of the code words in the codebook will be smaller than in the IP-VQ method since they represent only the error values instead of the actual pixel values. (ii) In the PE-VQ method, since the error values use fewer bits than in the IP-VQ method, the code word computation will be more accurate. It may be noted that in both the PE-VQ and IP-VQ methods, the codebook construction is based on the same hybrid method which makes use of a combination of ABC and genetic algorithms.
3 Proposed methodology
3.1 Artificial neural network
3.1.1 Prediction process
The prediction process is implemented in a lossless manner as follows:
The predicted values are rounded-off to the nearest integers in order to make the prediction error values also integers. The rounding-off operation is carried out both at the compression and decompression stages.
In the proposed method, the loss occurs only during the vector quantization process.
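As an illustration, the lossless prediction stage described above can be sketched as follows. This is a minimal sketch, not the authors' trained ANN: a simple mean of the preceding pixels stands in for the network output, and the function names are assumptions. Because the prediction is rounded to an integer identically at both ends, the decoder recovers every pixel exactly.

```python
import numpy as np

def predict(context):
    """Stand-in for the ANN predictor: mean of the preceding pixels.
    In the actual method, a trained ANN produces this value."""
    return float(np.mean(context)) if len(context) else 0.0

def prediction_errors(row):
    """Integer prediction errors for one row of 8-bit pixels.
    Rounding the prediction makes the error e = x - round(p) an integer."""
    errors = []
    for i, x in enumerate(row):
        p = int(round(predict(row[max(0, i - 3):i])))
        errors.append(int(x) - p)
    return errors

def reconstruct(errors):
    """The identical predictor and rounding at the decoder recover the
    original pixels exactly, so this stage is lossless."""
    row = []
    for e in errors:
        p = int(round(predict(row[max(0, len(row) - 3):])))
        row.append(p + e)
    return row
```

Only the error sequence is passed on to the vector quantization stage; the loss in the overall scheme therefore comes solely from quantizing these errors.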
3.2 Vector quantization of prediction errors based on codebook
Eq. (3) is computed for v = 1, 2, …, S, where S is the size of the error codebook. The matching ECW (C_v) for the error code vector X_u (i.e., error block u) is identified as the one corresponding to the index value v which yields the minimum value for E_u. This index value is then stored in memory instead of the error block u.
Codebooks of sizes S = 128, 256 and 512 are considered in this work with K = 8 bits. The compression ratios obtained for these cases will be (8/7) L, L and (8/9) L respectively.
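The nearest-code-word search and the resulting compression ratios can be sketched as below. This is an illustrative sketch: the squared Euclidean distortion stands in for the distortion measure, the codebook is assumed to be a NumPy array of S code words of length L, and the function names are assumptions.

```python
import numpy as np

def encode_block(x, codebook):
    """Return the index v of the error code word C_v closest to the
    error vector x (minimum squared-error distortion)."""
    distortions = np.sum((codebook - x) ** 2, axis=1)
    return int(np.argmin(distortions))

def compression_ratio(L, S, K=8):
    """CR when a block of L K-bit values is replaced by one
    log2(S)-bit codebook index."""
    return (K * L) / np.log2(S)

# With K = 8, codebook sizes S = 128, 256 and 512 give
# CR = (8/7)L, L and (8/9)L respectively, as stated above.
```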
3.3 ABC-GA technique for error codebook formation
In the proposed method, the ABC-GA technique is used to determine the error code words which constitute the error codebook. The proposed ABC-GA technique combines the best features of the two algorithms, namely, the ABC and GA algorithms [13, 14].
The ABC algorithm employs three types of bees or phases, namely, employed bees (EB), onlooker bees (OB) and scout bees (SB). In the EB phase, an initial set of ECVs is selected from the available ECVs based on fitness value. In the OB phase, the best ECVs among this set are identified using a greedy approach. In the SB phase, ECWs are selected based on the ECVs identified in the EB and OB phases. The problem with the ABC algorithm is that the best ECVs may not get selected in the OB phase due to the local minima problem. In order to overcome this problem, the OB phase is replaced by GA. The GA method uses genetic operators (crossover and mutation) to determine the best ECVs from a set of initial and newly crossed-over ECVs. On the other hand, if the GA method is applied independently for ECW selection, it requires more memory space and a longer computation time since new ECVs have to be generated based on the crossover of all the original ECVs.
In general, the GA method takes more time to converge, though its accuracy is high. On the other hand, the ABC method takes less time to converge, but its accuracy is lower as it tends to fall into local minima. The efficiencies of these two algorithms can be improved by overcoming the limitations cited above. For this purpose, we make use of a combination of the ABC and GA algorithms, called the ABC-GA method, in which the GA is introduced in the OB phase of the ABC algorithm.
Pseudocode.1. Steps to implement ABC-GA
1: Acquire the set of ECVs
2: Initialize one EB for each ECV
a) Determine neighbour vectors of EB
b) Calculate the approximate fitness of each ECV
3: Apply genetic algorithm in OB phase
a) Produce new ECV from existing ECVs by crossover operation
b) Compute the best fitness value (BFV) and normalize the probability value
c) Apply mutation to form a new set of ECWs based on the BFV
4: Generate new combinations of ECVs in place of the rejected ECVs in the scout bee phase and repeat steps 2 and 3 until the requirements are met
5: Formulate the ECB with the computed ECWs
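The pseudocode above can be turned into a small, self-contained sketch. This is an illustrative simplification rather than the authors' exact implementation: candidate codebooks are scored by quantization distortion, the employed-bee phase performs a greedy local perturbation, a GA crossover-and-mutation step takes the place of the onlooker phase, and the scout phase re-seeds the worst candidate. All function names, population sizes and noise scales are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def distortion(codebook, data):
    """Mean squared distance from each training error vector to its
    nearest code word (lower is better)."""
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).mean()

def abc_ga_codebook(data, S=8, pop=10, iters=30):
    """Hybrid ABC-GA sketch for selecting S code words from the
    training error vectors in `data` (shape: vectors x dimension)."""
    dim = data.shape[1]
    # Population of candidate codebooks, seeded from training vectors.
    cands = [data[rng.choice(len(data), S)] + rng.normal(0, 1, (S, dim))
             for _ in range(pop)]
    for _ in range(iters):
        # Employed bees: greedy local perturbation of each candidate.
        for i, c in enumerate(cands):
            nb = c + rng.normal(0, 0.5, c.shape)
            if distortion(nb, data) < distortion(c, data):
                cands[i] = nb
        # GA in place of the onlooker phase: cross the two fittest
        # candidates row-wise, then mutate the child.
        cands.sort(key=lambda c: distortion(c, data))
        mask = rng.random((S, 1)) < 0.5
        child = np.where(mask, cands[0], cands[1])
        child += rng.normal(0, 0.1, child.shape)  # mutation
        if distortion(child, data) < distortion(cands[-1], data):
            cands[-1] = child
        # Scout bees: re-seed the worst candidate at random.
        cands.sort(key=lambda c: distortion(c, data))
        cands[-1] = data[rng.choice(len(data), S)]
    return min(cands, key=lambda c: distortion(c, data))
```

Replacing the onlooker phase with the GA step is what gives the hybrid a chance to escape the local minima noted above while keeping the fast ABC-style local search.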
3.4 Computational complexity
The proposed method involves computationally intensive operations such as prediction and vector quantization at both the encoding and decoding stages. The prediction process makes use of ANNs which require training using algorithms such as Levenberg-Marquardt, resilient backpropagation and BFGS quasi-Newton backpropagation, as well as trial-and-error experiments to identify the optimal configuration based on the root mean square error (RMSE) values. Similarly, the vector quantization process involves codebook generation using the ABC and genetic algorithms. Hence, the computational complexity of the proposed method is expected to be higher than that of JPEG 2000.
4 Database description
The performance of the proposed PE-VQ method is evaluated using three test databases, namely, the CLEF med 2009 database (DB1), the Corel database (DB2) and the standard test images database (DB3), in terms of CR and peak signal-to-noise ratio (PSNR) values. The results are compared with those obtained by the existing algorithms, namely, the IP-VQ method, the JPEG2000 method, and the singular value decomposition and wave difference reduction (SVD & WDR) method.
4.1 CLEF med 2009 database (DB1)
Skull front view (100)
Skull left view (100)
Skull right view (50)
Radius and ulna bones (100)
Mammogram right (100)
Mammogram left (100)
Pelvic girdle (100)
Hand X-ray images (50)
Pelvic girdle + back bone (50)
4.2 Corel database (DB2)
4.3 Standard images (DB3)
5 Experimental results and discussion
In the experiments conducted, it is assumed that the original image is a grey-scale image with each pixel represented by 8 bits. From the experimental results, it is found that the error values range from −32 to 32, and hence the prediction errors are represented with 6 bits. Different values of CR are obtained by varying the size of the ECV (L) in Eq. (5). The ANN parameters such as weights and biases, activation functions, the training algorithm used, and the numbers of input and hidden layer neurons are assumed to be available at both the compression and decompression stages; hence, their values are not considered while computing the CR values. For codebook generation with group image sets such as DB1 and DB2, 60% of the total available images (100) are used for training and all 100 images are used for testing. In the case of individual images, the same image is used for training as well as for testing.
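For reference, the PSNR metric used throughout this section follows the standard definition for 8-bit images, PSNR = 10 log10(255^2 / MSE). A routine sketch (the function name is illustrative):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    Identical images give infinite PSNR (zero MSE)."""
    mse = np.mean((np.asarray(original, dtype=float) -
                   np.asarray(reconstructed, dtype=float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```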
5.1 Performance evaluation using DB1
Table 1 CR and PSNR values obtained for DB1 (codebook size = 256) for the nine DB1 image categories listed in Section 4.1
The results in Table 1 show that the proposed PE-VQ method performs well with the medical images, since PSNR values greater than 57, 52, 47, 45, and 42 are achieved for CR = 10, 20, 40, 60, and 80, respectively.
We note from Fig. 12 that by applying VQ on prediction errors, we are able to get higher PSNR values compared to those obtained by applying VQ on original pixel values.
5.2 Performance evaluation using DB2
Table 2 CR and PSNR values obtained for DB2 (codebook size = 256)
Table 2 results show that the proposed PE-VQ method performs well with the DB2, since PSNR values greater than 51, 48, 45, 42, and 39 are achieved for CR = 10, 20, 40, 60, and 80, respectively.
5.3 Performance evaluation using DB3
Table 3 CR and PSNR values obtained for standard images (DB3)
Table 3 results show that the proposed PE-VQ method performs well with the DB3, since PSNR values greater than 51, 49, 46, 43, and 40 are achieved for CR = 10, 20, 40, 60, and 80, respectively.
The results obtained for DB3 are also similar to those obtained for DB1 and DB2. From Figs. 11, 13, and 15 (corresponding to DB1, DB2, and DB3, respectively), it is clear that higher PSNR values are obtained for ECB size = 512 compared to 256 and 128. This is because, for a given ECV, the probability of finding a more optimum error code word in the ECB increases with increasing ECB size, which results in higher PSNR values. In other words, a more optimum codebook is obtained with a higher ECB size.
From Figs. 12, 14, and 16 (corresponding to DB1, DB2, and DB3, respectively), it may be noted that the PE-VQ method yields higher PSNR values compared to the IP-VQ method. As the size of the error code word in the PE-VQ method (6 bits) is smaller than the image pixel code word size in the IP-VQ method (8 bits), the distortion error E_u given in Eq. (5) is reduced, resulting in higher PSNR values. In other words, a more optimum error codebook is obtained in the PE-VQ method when compared to the IP-VQ method.
From Fig. 17, it is clear that the proposed PE-VQ method yields better PSNR values compared to other known methods. The reason for achieving higher PSNR values using the proposed PE-VQ method can be explained as follows.
Compression is achieved in the PE-VQ method in two stages, namely, prediction and vector quantization (codebook formation). As mentioned in section 3.1.1, the prediction process is implemented in a lossless manner and hence no quality degradation occurs due to this process. In the second stage, the codebook is formed based on prediction errors and not based on original image pixels as in the other methods [12, 32, 33]. Due to this, the sizes of the clusters in the codebook formation process become smaller and hence it becomes possible to identify a more accurate code word for each cluster. In addition, in the proposed method, an efficient algorithm (ABC-GA) is used to locate the cluster centres more accurately. Since the overall loss is minimized in the compression process, the PE-VQ method is able to achieve higher PSNR values compared to other methods.
The proposed PE-VQ method can also be extended to color images by decomposing them into the three basic color components R, G, and B and applying the proposed method to each color component separately.
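A minimal sketch of this per-channel extension, assuming `compress_channel` wraps the complete grey-scale PE-VQ pipeline (the function names here are illustrative):

```python
import numpy as np

def compress_color(img, compress_channel):
    """Apply a grey-scale codec (e.g. the full PE-VQ pipeline, passed
    in as `compress_channel`) to the R, G and B planes independently
    and restack the reconstructed planes."""
    return np.stack([compress_channel(img[..., c]) for c in range(3)],
                    axis=-1)
```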
The results presented in Sections 5.1, 5.2 and 5.3 are obtained without considering the additional side parameters to be transmitted. The following ANN and VQ parameters need to be transmitted.
Number of input neurons (3) = 2 bits
Input edge weights (30*32) = 120 bytes
Output edge weights (10*32) = 40 bytes
Bias at hidden layer (10*32) = 40 bytes
Transfer (activation) function index = 2 bits
Training function index = 4 bits
Initial image sample inputs (512 + 512) = 1024 bytes
The total number of bytes needed to transfer the ANN parameters is 1225.
Maximum number of bits required for the PE’s = 6 bits
Size of code word (L) = 10, 20, 40, 60, 80
Codebook size = 128, 256, 512
Number of bytes needed to transfer the VQ parameters varies from 960 (6 × 10 × 128 bits) to 30,720 (6 × 80 × 512 bits)
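The side-parameter totals listed above, and their effect on the achievable CR, can be reproduced with a short calculation (the function names are illustrative):

```python
def ann_overhead_bytes():
    """ANN side parameters: 2 + 4 + 2 bits of indices, plus
    30*32 + 10*32 + 10*32 bits of weights/biases, plus 1024 bytes
    of initial image samples."""
    bits = 2 + 4 + 2 + 30 * 32 + 10 * 32 + 10 * 32 + 1024 * 8
    return bits / 8

def vq_overhead_bytes(bits_per_error=6, L=10, S=128):
    """Codebook of S code words, each holding L 6-bit errors."""
    return bits_per_error * L * S / 8

def effective_cr(nominal_cr, image_bytes, overhead_bytes):
    """CR after charging the transmitted side parameters."""
    return image_bytes / (image_bytes / nominal_cr + overhead_bytes)
```

For a single 512 × 512 image at a nominal CR of 25, the roughly 2.2 kB of minimum side parameters already pull the effective CR down noticeably, consistent with the degradation discussed below for higher CR values.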
It is found that when sending a group of images (such as DB1), the CR values in the lower range 10 to 40 are not affected significantly. For CR values in the higher range (40 to 80), the maximum reduction in CR values is around 8%.
It is noticed from Figs. 18 and 19 that the maximum CR value that can be obtained with the proposed method is only approximately 25. For CR values >25, the significant increase in the number of side parameters causes a reduction in the CR value. Further, it may be observed that for CR values up to 25, the PE-VQ method yields higher PSNR values compared to the existing methods, namely, BPG and JPEG2000. Thus, it can be concluded that the proposed method can be used with advantage for applications where the CR values are less than 25. On the other hand, there is no such constraint for the PE-VQ method when sending a group of images such as DB1 since the side parameters do not cause any significant change. Hence, when sending a group of images, the proposed method has an advantage over the other methods as it yields higher PSNR values without any restriction on the CR value.
A new lossy image compression scheme called PE-VQ method which makes use of prediction errors with a codebook has been presented. The codebook has been constructed using a combination of artificial bee colony and genetic algorithms. The proposed method has been tested using three types of image databases, namely, CLEF med 2009, Corel 1 k database and standard images. The experimental results show that the proposed PE-VQ method yields higher PSNR value for a given compression ratio when compared to the existing methods. The prediction error-based PE-VQ method is found to achieve higher PSNR values compared to those obtained by applying VQ on the original image pixels.
The authors thank Multimedia University, Malaysia for supporting this work through Graduate Research Assistantship Scheme. They also wish to thank the anonymous reviewers for their comments and suggestions which helped in enhancing the quality of the paper.
This work is supported by the Multimedia University, Malaysia.
CE contributed towards the general framework of the proposed prediction-based model for image compression. AMUA carried out the implementation of the research idea and tested the proposed method using three different datasets. RK carried out the analysis of the experimental results and checked the correctness of the algorithms applied. All three authors took part in writing and proofreading the final version of the paper.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
- DM Chandler, "Seven challenges in image quality assessment: past, present, and future research," ISRN Signal Processing, 2013
- G Vijayvargiya, S Silakari, R Pandey, "A survey: various techniques of image compression," arXiv preprint arXiv:1311.687, 2013
- D Taubman, High performance scalable image compression with EBCOT. IEEE Trans. Image Process. 9(7), 1158–1170 (2000)
- S Masood, M Sharif, M Yasmin, M Raza, S Mohsin, Brain image compression: a brief survey. Res. J. Appl. Sci. 5, 49–59 (2013)
- GE Blelloch, "Introduction to data compression," Computer Science Department, Carnegie Mellon University, 2001. https://karczmarczuk.users.greyc.fr/matrs/Dess/RADI/Refs/compression.pdf
- C Yan, Y Zhang, F Dai, X Wang, L Li, Q Dai, Parallel deblocking filter for HEVC on many-core processor. Electron. Lett. 50(5), 367–368 (2014)
- C Yan, Y Zhang, F Dai, J Zhang, L Li, Q Dai, Efficient parallel HEVC intra-prediction on many-core processor. Electron. Lett. 50(11), 805–806 (2014)
- D Svozil, V Kvasnicka, J Pospichal, Introduction to multi-layer feed-forward neural networks. Chemom. Intel. Lab. Syst. 39(1), 43–62 (1997)
- B Karlik, AV Olgac, Performance analysis of various activation functions in generalized MLP architectures of neural networks. Int. J. Artif. Intell. Expert. Syst. 1(4), 111–122 (2011)
- TC Lu, CY Chang, A survey of VQ codebook generation. J. Inf. Hiding Multimedia Signal. Process. 1(3), 190–203 (2010)
- HB Kekre, T Sarode, Two level vector quantization method for codebook generation using Kekre’s proportionate error algorithm. Int. J. Image Process. 4(1), 1–10 (2010)
- A Mohamed Uvaze Ahamed, C Eswaran, R Kannan, "Lossy image compression based on vector quantization using artificial bee colony and genetic algorithms," International Conference on Computational Science and Technology, Kota Kinabalu, Sabah, November 2016 (to be published in Advanced Science Letters)
- D Karaboga, B Basturk, Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems, in International Fuzzy Systems Association World Congress (Springer, Berlin Heidelberg, 2007), pp. 789–798
- U Maulik, S Bandyopadhyay, Genetic algorithm-based clustering technique. Pattern Recognit. 33(9), 1455–1465 (2000)
- CH Son, JW Kim, SG Song, SM Park, YM Kim, Low complexity embedded compression algorithm for reduction of memory size and bandwidth requirements in the JPEG2000 encoder. IEEE Trans. Consum. Electron. 56(4), 2421–2429 (2010)
- YX Lee, TH Tsai, An efficient embedded compression algorithm using adjusted binary code method, in Proceedings of the IEEE International Symposium on Circuits and Systems, 2008, pp. 2586–2589
- D Zhao, S Zhu, F Wang, Lossy hyperspectral image compression based on intra-band prediction and inter-band fractal encoding. Computers & Electrical Engineering, 2016
- Y Chen, Z Huang, H Sun, M Chen, H Tan, "Lossy image compression using PCA and contourlet transform," in MATEC Web of Conferences, Vol. 54, EDP Sciences, 2016. http://search.proquest.com/openview/18817d58241afa784bacda038cedfb0c/1?pq-origsite=gscholar&cbl=2040549
- V Engelson, D Fritzson, P Fritzson, Lossless compression of high-volume numerical data from simulations, in Data Compression Conference, 2000, p. 574
- P Lindstrom, M Isenburg, Fast and efficient compression of floating-point data. IEEE Trans. Vis. Comput. Graph. 12(5), 1245–1250 (2006)
- Y Zhang, DA Adjeroh, Prediction by partial approximate matching for lossless image compression. IEEE Trans. Image Process. 17(6), 924–935 (2008)
- T Bruylants, A Munteanu, P Schelkens, Wavelet based volumetric medical image compression. Signal Process. Image Commun. 31, 112–133 (2015)
- SM Hosseini, AR Naghsh-Nilchi, Medical ultrasound image compression using contextual vector quantization. Comput. Biol. Med. 42(7), 743–750 (2012)
- MH Horng, Vector quantization using the firefly algorithm for image compression. Expert Syst. Appl. 39(1), 1078–1091 (2012)
- S Thepade, A Devkar, Appraise of similarity measures in image compression using vector quantization codebooks of Thepade's Cosine and Kekre's Error Vector Rotation, in IEEE International Conference on Pervasive Computing (ICPC), 2015, pp. 1–6
- A Mohamed Uvaze Ahamed, C Eswaran, R Kannan, CBIR system based on prediction errors. J. Inf. Sci. Eng. 33(2), 347–365 (2017)
- Y Linde, A Buzo, R Gray, An algorithm for vector quantizer design. IEEE Trans. Commun. 28(1), 84–95 (1980)
- TM Deserno, S Antani, L Rodney Long, Content-based image retrieval for scientific literature access. Methods Inf. Med. 48(4), 371 (2009)
- Database URL http://wang.ist.psu.edu/docs/related/
- Database URL http://sipi.usc.edu/database/database.php?volume=misc
- DL Olson, D Delen, "Advanced data mining techniques," Springer Science & Business Media, 2008, pp. 138–145. https://books.google.com/books?hl=en&lr=&id=2vb-LZEn8uUC&oi=fnd&pg=PA2&dq=Olson+DL+and+Delen+D,+%22Advanced+data+mining+techniques,&ots=zVa-W92RoP&sig=kEiFlEf6-rdNUAAAFJ5JM4Z7Pcs
- NN Ponomarenko, VV Lukin, KO Egiazarian, L Lepisto, Adaptive visually lossless JPEG-based color image compression. SIViP 7(3), 437–452 (2013)
- AM Rufai, G Anbarjafari, H Demirel, Lossy image compression using singular value decomposition and wavelet difference reduction. Digital Signal Process. 24, 117–123 (2014)
- E Kougianos, SP Mohanty, G Coelho, U Albalawi, P Sundaravadivel, Design of a high-performance system for secure image communication in the Internet of Things. IEEE Access 4, 1222–1242 (2016)