Power-optimized log-based image processing system
EURASIP Journal on Image and Video Processing volume 2014, Article number: 37 (2014)
Abstract
The continuous development of devices such as mobile phones and digital cameras has led to a growing amount of research in the image processing field. Today's image-acquiring tools are battery powered, and hence, power optimization becomes a major factor in the hardware implementation of image systems. This paper proposes an image processing system that utilizes a set partitioning in hierarchical trees (SPIHT)-integrated discrete wavelet transform (DWT) structure for image processing. The overall advantage of this proposal is achieved by modifying the arithmetic units in the DWT structure. By utilizing a logarithm-based floating point unit (FPU) in the DWT computation structures, the logarithmic number system (LNS) adaptation in the arithmetic unit results in overall accuracy enhancement with reduced area and power consumption. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure, the model is implemented using Xilinx and Altera field-programmable gate array (FPGA) devices. The analyses obtained from the implementation show that the structure incorporating the log-based FPU is 25% more accurate with 47% lower power consumption than integer-styled FPU-based DWTs, along with enhanced speed and optimal area utilization.
1 Introduction
Discrete wavelet transform (DWT) is increasingly being used for image coding. In particular, biorthogonal symmetric wavelets have manifested remarkable abilities in still image compression. Hence, this paper proposes an image processing system focusing on the biorthogonal 9/7 DWT structure. DWT has traditionally been implemented using the convolution method. This implementation demands a large number of computations and storage features that are not desirable for high-speed or low-power applications. Sweldens [1] proposed a new mathematical formulation for wavelet transformation based on spatial construction of the wavelets, and a very versatile scheme for its factorization was suggested in [2]. This approach is called the lifting-based wavelet transform. The main feature of the lifting-based DWT scheme is to break up the high-pass and low-pass filters into a sequence of upper and lower triangular matrices and convert the filter implementation into banded matrix multiplications. This scheme has several advantages over the convolution techniques, including 'in-place' computation of the DWT and symmetric forward and inverse transforms. Accordingly, the DWTs implemented using the lifting scheme in the JPEG 2000 standard are the biorthogonal lossless 5/3 integer and the lossy 9/7 floating point filter banks. Numerous architectures have been proposed to provide low-power, high-speed, and area-efficient hardware implementations of DWT computation [3–16]. Shi et al. [6] proposed an efficient folded architecture (EFA) with low hardware complexity. The flipping structure is another important DWT architecture, proposed by Huang et al. [7]. A high-speed, reduced-area two-dimensional (2-D) DWT architecture was proposed by Zhang et al. [10]. While most of these architectures address the optimization of critical paths, only some of them, such as Lee et al. [16], deal not only with the internal data path but also with coefficient precision optimization.
This paper focuses on the lossy biorthogonal 9/7 lifting-based DWT, which yields higher computational complexity owing to floating point computations. Implementing this structure in hardware requires additional complex hardware to handle the floating point computations, demanding a separate unit for their processing: the floating point unit (FPU). In existing FPUs, the arithmetic computation phenomenon is still the same as in ordinary arithmetic logic unit (ALU) operations, acting like an additional prop for normal ALUs. An island-style FPGA with embedded FPUs [17] was proposed by Beauchamp et al., while a coarse-grained FPU was suggested by Ho et al. [18]. Even et al. [19] suggested a multiplier that operates on either single-precision or double-precision floating point numbers. An optimized FPU in a hybrid FPGA was suggested by Yu et al. [20] and a configurable multimode FPU for FPGAs by Chong and Parameswaran [21]. Performance improvement and optimization of these suggested models have been studied and employed in each successive development time frame. However, while these models fine-tune the FPU in terms of area, they offer no suggestions for power reduction or accuracy enhancement. Anand et al. [22] proposed a log lookup table (LUT)-based FPU, which utilizes the logarithmic principle to achieve good accuracy with reduced power consumption. However, this model has some serious drawbacks, including increased delay and additional memory for log LUT handling, which affect performance in terms of area and speed. Hence, the proposed scheme suggests an efficient model for performing floating point operations that reduces power consumption by reducing operation complexity through log conversion [23]. This reduces the overall computation burden, as the process is simply a numerical transformation to the logarithmic domain.
Thus, a reduction in power consumption and an increase in accuracy are attained with optimal area usage [24]. Direct mapping of floating point numerals is not possible, and hence, a standardized form is adopted by using the IEEE 754 single-precision floating point standard [25]. An optimized DWT architecture with a log-based FPU is proposed; a preliminary version of this work was presented in [26]. This paper revises the external memory access, and a more accurate and detailed error analysis and simulation results are given.
After the lifting-based DWT was introduced, several coding algorithms were proposed to code the wavelet coefficients efficiently while taking storage space and redundancy into consideration. These algorithms are embedded zerotree wavelet (EZW), embedded block coding with optimized truncation (EBCOT), and set partitioning in hierarchical trees (SPIHT). Among these, the SPIHT algorithm is the most preferable because of its low computational complexity and better image compression performance. SPIHT coding, proposed by Said and Pearlman in 1996 [27], does not require arithmetic coding and provides a cheaper and faster hardware solution. It was modified by Wheeler and Pearlman [28] into a no-list SPIHT (NLS) to reduce memory usage. Later, Corsonello et al. [29] proposed a low-cost implementation of NLS to improve the coding speed. The work in [30] modified the scanning process and utilized fixed memory allocation for the data list to reduce hardware complexity. To achieve high throughput, Cheng et al. [31] proposed a modified SPIHT that processes a 4 × 4 bit plane in one cycle. Fry and Hauck [32] improved this model with a bit-plane-parallel SPIHT encoder architecture to further increase the throughput. In 2013, Jin and Lee [33] proposed a block-based pass-parallel SPIHT (BPS) algorithm, which employs pipelining and parallelism. This scheme has the highest throughput among the existing architectures. Hence, we adopt BPS in our image processing core.
This proposal introduces an enhanced image processing system, which utilizes a low-power DWT structure along with a log-based FPU and a BPS coder. The optimized decomposition level of the DWT is selected based on performance parameters such as peak signal-to-noise ratio, compression ratio, and computational complexity. To examine the specific hardware performance and trade-offs associated with the solutions presented here, the architecture is first verified in Matlab for the image parameters. In addition, the hardware implementation is carried out using Verilog hardware description language (HDL) and synthesized using Xilinx and Altera FPGA families to verify its device-level performance based on VLSI parameters.
The rest of the paper is organized as follows. Section 2 gives the background supporting the basic understanding of the lifting-based discrete wavelet transform and SPIHT coding techniques. Section 3 presents the hardware implementation of the forward 2-D DWT with a modified computation unit adopting the log-based FPU and SPIHT coders. Section 4 details the experimental setup for the proposed real-time image processing system and assesses the performance of the proposed architecture in comparison with other existing architectures. Conclusions and final remarks are given in Section 5.
2 Background
2.1 Discrete wavelet transform
2.1.1 Lifting scheme
The lifting scheme is a computationally efficient way of implementing DWT, and many references describe the lifting-based DWT [1–16]. The transform proceeds first with the lazy wavelet, then alternating dual lifting and primal lifting steps, and concludes with scaling. The inverse transform proceeds first with scaling, then alternating primal lifting and dual lifting steps, and finally the inverse lazy transform. The inverse transform can be derived immediately from the forward transform by running the scheme backwards and flipping the signs, as shown in Figure 1.
The lifting scheme implements a filter bank as a multiplication of upper and lower triangular matrices, where each matrix constitutes a lifting step [1, 2]. Let $\tilde{h}\left(z\right)$ and $\tilde{g}\left(z\right)$ be the low-pass and high-pass analysis filters, respectively, and let h(z) and g(z) be the low-pass and high-pass synthesis filters, respectively. The corresponding polyphase matrices are defined as:
$$\tilde{P}\left(z\right)=\left[\begin{array}{cc}{\tilde{h}}_{e}\left(z\right)& {\tilde{g}}_{e}\left(z\right)\\ {\tilde{h}}_{o}\left(z\right)& {\tilde{g}}_{o}\left(z\right)\end{array}\right]$$(1)
where ${\tilde{h}}_{e}$ contains the even coefficients and ${\tilde{h}}_{o}$ contains the odd coefficients:
$$\tilde{h}\left(z\right)={\tilde{h}}_{e}\left({z}^{2}\right)+{z}^{-1}{\tilde{h}}_{o}\left({z}^{2}\right)$$(2)
It has been shown that if $\left(\tilde{h},\tilde{g}\right)$ is a complementary filter pair, the Euclidean algorithm can be used to decompose $\tilde{P}\left(z\right)$. This $\tilde{P}\left(z\right)$ can always be factored into lifting steps as
$$\tilde{P}\left(z\right)=\prod_{i=1}^{m}\left[\begin{array}{cc}1& {s}_{i}\left(z\right)\\ 0& 1\end{array}\right]\left[\begin{array}{cc}1& 0\\ {t}_{i}\left(z\right)& 1\end{array}\right]\left[\begin{array}{cc}K& 0\\ 0& 1/K\end{array}\right]$$(3)
The lifting wavelet transform consists of three steps as in Figure 1:

1.
Splitting. The original signal X(n) is split into odd and even sequences (lazy wavelet transform)
$${X}_{e}\left(n\right)=X\left(2n\right)$$(4)$${X}_{o}\left(n\right)=X\left(2n+1\right)$$(5) 
2.
Lifting. It consists of one or more steps m of the form

(a)
Predict/Dual lifting. If X(n) possesses local correlation, then X_{e}(n) and X_{o}(n) also retain local correlation. Therefore, one subset is used to predict the other. In the prediction step, the filtered even array is used to predict the odd array, and the new odd array is redefined as the difference between the existing array and the predicted one.
$$D\left(n\right)={X}_{o}\left(n\right)-{s}_{i}\left({X}_{e}\left(n\right)\right)$$(6) 
(b)
Update/Primal lifting. To eliminate the aliasing that appears due to the down-sampling of the original signal, the even array is updated using the filtered new odd array.
$$A\left(n\right)={X}_{e}\left(n\right)+{t}_{i}\left(D\left(n\right)\right)$$(7)
Eventually, after m pairs of prediction and update steps, the even samples become the low-frequency component, while the odd samples become the high-frequency component.

3.
Normalization/Scaling. After m lifting steps, the scaling coefficients K and 1/K are applied to the even and odd samples, respectively, in order to obtain the low-pass and high-pass subbands.
For the biorthogonal 9/7 wavelet, four lifting steps and one scaling step are used, where s_{1}(z) = α(1 + z^{−1}), s_{2}(z) = γ(1 + z^{−1}), t_{1}(z) = β(1 + z), and t_{2}(z) = δ(1 + z). The parameters α, β, γ, and δ are two-tap symmetric filter coefficients, and K and 1/K are scaling factors.
Lifting steps:
Scaling:
where α = 1.586134342, β = 0.05298011854, γ = 0.8829110762, δ = 0.4435068522, and K = 1.149604398.
The original data to be filtered is denoted by X(n), and the outputs a_{ i } and d_{ i } are the approximation and detail coefficients, respectively. We focus on the implementation of the lifting-based DWT, which yields higher computational complexity due to floating point computation. Hence, we suggest an efficient model for performing the floating point operations that reduces power by reducing the operating complexities through log conversion [22, 23].
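As a concrete illustration of the four lifting steps and the scaling described above, the following Python sketch applies the stated coefficients to a 1-D signal and then inverts the transform. It is a minimal software model, not the hardware data path: boundary samples are handled by simple index clamping rather than the symmetric extension used in JPEG 2000, and the signs follow the predict/update forms of Equations 6 and 7.

```python
# 1-D biorthogonal 9/7 lifting transform (software sketch, clamped boundaries).
ALPHA, BETA = 1.586134342, 0.05298011854
GAMMA, DELTA = 0.8829110762, 0.4435068522
K = 1.149604398

def dwt97_forward(x):
    """Two predict/update pairs plus scaling on an even-length signal."""
    m = len(x) // 2
    even, odd = x[0::2], x[1::2]
    clamp = lambda arr, i: arr[min(max(i, 0), m - 1)]
    # Predict 1: D(n) = X_o(n) - s1(X_e(n))
    d1 = [odd[i] - ALPHA * (even[i] + clamp(even, i + 1)) for i in range(m)]
    # Update 1: A(n) = X_e(n) + t1(D(n))
    a1 = [even[i] + BETA * (clamp(d1, i - 1) + d1[i]) for i in range(m)]
    # Predict 2 and Update 2
    d2 = [d1[i] - GAMMA * (a1[i] + clamp(a1, i + 1)) for i in range(m)]
    a2 = [a1[i] + DELTA * (clamp(d2, i - 1) + d2[i]) for i in range(m)]
    # Scaling: K on the approximation band, 1/K on the detail band
    return [K * v for v in a2], [v / K for v in d2]

def dwt97_inverse(low, high):
    """Undo scaling, then run the lifting steps in reverse with flipped signs."""
    m = len(low)
    clamp = lambda arr, i: arr[min(max(i, 0), m - 1)]
    a2 = [v / K for v in low]
    d2 = [v * K for v in high]
    a1 = [a2[i] - DELTA * (clamp(d2, i - 1) + d2[i]) for i in range(m)]
    d1 = [d2[i] + GAMMA * (a1[i] + clamp(a1, i + 1)) for i in range(m)]
    even = [a1[i] - BETA * (clamp(d1, i - 1) + d1[i]) for i in range(m)]
    odd = [d1[i] + ALPHA * (even[i] + clamp(even, i + 1)) for i in range(m)]
    x = [0.0] * (2 * m)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step is individually invertible, the inverse reproduces the input exactly (up to floating point rounding) regardless of the boundary rule, provided the forward and inverse transforms use the same rule.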
2.2 Set partitioning in hierarchical trees
The SPIHT algorithm is applied to a wavelet-transformed image, which can be organized as a spatial orientation tree (SOT), shown in Figure 2a. The arrows in Figure 2a represent the relationship between a parent and its offspring, and each node of the tree corresponds to a coefficient in the transformed image. SPIHT scans the DWT coefficients in Morton scanning order, as shown in Figure 2b, and assigns the parent-child hierarchy on the scanned coefficients.
For a given set T, SPIHT defines a significance function, which indicates whether the set T has pixels larger than a given threshold. S_{ n }(T), the significance of set T in the n th bit plane, is defined as in Equation 14:
$${S}_{n}\left(T\right)=\left\{\begin{array}{cc}1,& \underset{\left(i,j\right)\in T}{ \max}\left|w\left(i,j\right)\right|\ge {2}^{n}\\ 0,& \text{otherwise}\end{array}\right.$$(14)
Note: w(i, j) is the coefficient value at position (i, j) in the wavelet domain, T stands for the set of coefficients, and S_{ n }(T) denotes the significance state of T at bit plane n.
When S_{ n }(T) is '0', T is called an insignificant set; otherwise, T is called a significant set. An insignificant set can be represented as a single bit '0'. A significant set is partitioned into subsets, and their significances have to be tested again based on the zerotree hypothesis. SPIHT encodes a given set T and its descendants (denoted by D(T)) together by checking the significance of T ∪ D(T) and by representing T ∪ D(T) as a single symbol '0' if T ∪ D(T) is insignificant. On the other hand, if T ∪ D(T) is significant, T has to be partitioned into subsets and each subset is tested independently.
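The max-magnitude significance test and the recursive set splitting it drives can be sketched in Python. This is illustrative only, not the paper's hardware: sets are modeled as flat lists and split into halves rather than following the SOT descendant structure, and the single-pixel output is a toy sign-magnitude stub.

```python
def significance(coeffs, n):
    """S_n(T): 1 if any coefficient magnitude in the set reaches 2**n, else 0."""
    return 1 if max(abs(w) for w in coeffs) >= (1 << n) else 0

def encode_set(coeffs, n):
    """Emit '0' for an insignificant set; otherwise partition into halves and
    test each subset (single significant pixels emit a '1' plus a sign stub)."""
    if significance(coeffs, n) == 0:
        return "0"                      # whole subtree coded as one symbol
    if len(coeffs) == 1:
        return "1" + ("-" if coeffs[0] < 0 else "+")
    mid = len(coeffs) // 2
    return "1" + encode_set(coeffs[:mid], n) + encode_set(coeffs[mid:], n)
```

For example, a set whose largest magnitude is 37 is significant in bit planes n ≤ 5 (since 2^5 = 32 ≤ 37) and collapses to the single bit '0' at n = 6.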
The spatial orientation trees are illustrated in Figure 2b for a 16 × 16 image transformed by three levels of discrete wavelet decomposition. Each level is divided into four subbands. The subband a_{2}a_{2} is divided into four groups of 2 × 2 coefficients. In each group, each of the four coefficients becomes the root of a spatial orientation tree. The square denoted by R in Figure 2a represents the subband a_{3}a_{3} (low-pass subband) in Figure 2b, which corresponds to the root. To increase the speed of both the encoder and decoder, we adopt the BPS algorithm [33] for our image processing core. The BPS algorithm modifies the processing order of the original SPIHT algorithm so that an image is partitioned into multiple blocks, and the coefficient trees are local to these blocks. Furthermore, BPS employs pipelining and parallelism, which gives it the highest throughput among the existing architectures.
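The Morton (Z-order) scan used by SPIHT can be generated by bit interleaving; a minimal sketch follows (the `bits` parameter and grid size are arbitrary choices for illustration).

```python
def morton_key(row, col, bits=8):
    """Interleave row and column bits (row bits in the higher positions),
    producing the Z-order scan index of coefficient (row, col)."""
    key = 0
    for b in range(bits):
        key |= ((col >> b) & 1) << (2 * b)
        key |= ((row >> b) & 1) << (2 * b + 1)
    return key

def morton_scan(size):
    """All (row, col) positions of a size x size image in Morton order."""
    cells = [(r, c) for r in range(size) for c in range(size)]
    return sorted(cells, key=lambda rc: morton_key(*rc))
```

Consecutive groups of four scan positions form the 2 × 2 blocks of siblings, which is why this order fits the SPIHT parent-child hierarchy so naturally.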
3 Proposed architecture
Figure 3 shows the hierarchical placement of the different cores, which together form the proposed enhanced image processing system. The system incorporates a DWT structure with a BPS coder, and the overall flow is monitored using three different control units with proper synchronization signals. The function and nature of each block are discussed as follows.
3.1 Discrete wavelet transform core
The memory issues and multiplier implementation are the most critical parts of the hardware implementation of the 2-D DWT. In general, memory-based architectures can be classified into three categories: level-based, line-based, and block-based methods [34]. Based on the hardware constraints, any of the above methods could be selected. However, external memory access would consume the most power and would require more bandwidth. This system uses line-based processing for implementing the 2-D DWT architecture. This method uses embedded memory, which acts as a buffer between the row and column processing and thus avoids heavy dependence on external memory. The inputs for the system are fed from a memory management system, as shown in Figure 4. This comprises a memory block, which is updated at regular intervals based on the sync signals from the control blocks. The sync signals are generated to match the overall delay, which consists of two critical path delays (T_{mul} + 2T_{adder}).
In hardware implementations, the multiplier occupies a large amount of hardware resources. To provide a low-power, high-speed, and area-efficient multiplier for DWT computation, Shi et al. [6] adopted shift-add operations to optimize the multiplications, since the coefficients of wavelet filters are constant. Zhang et al. [35] used the dedicated 18-bit multiplier blocks present in the FPGA. In spite of the numerous methods that have been proposed, the overall latency of the circuit still depends on the multiplier. Hence, it is necessary to modify the multiplier structure in order to achieve minimum area and computation time. Furthermore, the accuracy also depends on the floating point lifting coefficients and their arithmetic operations. These three factors demand modification of the computation units in the DWT architecture. Hence, this paper proposes a new computational unit based on the logarithmic principle in order to achieve minimal computation time with optimal area consumption. Moreover, adaptation of the log principle results in a good power reduction, mainly because of reduced operator and operand strengths. The log-based floating point unit is discussed in the next subsection.
An enhanced architecture for the DWT is proposed in this paper. The main scheme of this architecture allows the computation components to achieve precise outputs. Figure 5 shows the proposed DWT architecture, in which the modified computation unit adopts a log-based floating point unit to provide a good reduction in power and area with a small compromise in speed. The B9/7 2-D DWT is computed in row-column fashion, i.e., row processing is carried out first, followed by column processing. The image, which is initially stored in the external memory, is read into the image processing core in row-by-row order. The row processor performs horizontal filtering on the rows, using the six computing modules given in Equations 8 to 13, and writes the resultant approximation a_{ 1 } and detail d_{ 1 } coefficients to the local memory. Once a sufficient number of rows have been processed, the column processor starts vertical filtering, which consists of the same six computing modules. It fetches the approximation coefficients as inputs from the local memory and generates four subbands: a_{ 1 }a_{ 1 }, a_{ 1 }d_{ 1 }, d_{ 1 }a_{ 1 }, and d_{ 1 }d_{ 1 }. These four subbands are written back to the external memory in row-wise order. Multiple-level decomposition is performed on this architecture in non-interleaved fashion, and results between levels are stored in the external memory. For the higher levels, the approximation subband is read from the external memory and four higher-level subbands are generated using the same computing modules. This operation continues until the desired levels of wavelet decomposition are finished, as shown in Figure 6. As the real-time image processing core requires high performance, we adopt a highly pipelined, log-based FPU for implementing the lifting steps.
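The row-column, multi-level flow described above can be sketched in a few lines of Python. For brevity, a Haar-style average/difference pair stands in for the six B9/7 computing modules; the quadrant bookkeeping (recursing on the approximation subband), not the filter, is the point of this sketch.

```python
def lift1d(v):
    """One-level 1-D analysis: averages (low band) then differences (high band).
    Haar stand-in for the B9/7 lifting steps."""
    low = [(v[2 * i] + v[2 * i + 1]) / 2 for i in range(len(v) // 2)]
    high = [(v[2 * i + 1] - v[2 * i]) / 2 for i in range(len(v) // 2)]
    return low + high

def dwt2d(img, levels):
    """Row processing, then column processing, recursing on the aa quadrant."""
    img = [row[:] for row in img]
    size = len(img)
    for _ in range(levels):
        # horizontal filtering of each row in the active sub-square
        for r in range(size):
            img[r][:size] = lift1d(img[r][:size])
        # vertical filtering of each column in the active sub-square
        for c in range(size):
            col = lift1d([img[r][c] for r in range(size)])
            for r in range(size):
                img[r][c] = col[r]
        size //= 2  # the next level works on the approximation subband only
    return img
```

On a constant image, all detail subbands come out zero and the whole energy collapses into the top-left approximation coefficient, which is a quick sanity check of the quadrant layout.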
Log-based floating point unit
This paper utilizes the IEEE 754 standard format for representing floating point numerals, where a real number X is divided into three parts: 1 sign bit (s), 8 exponent bits (E), and 23 mantissa bits (m). This is represented as
$$X={\left(-1\right)}^{s}\times 1.m\times {2}^{E-127}$$(15)
This demands three different computation procedures. Hence, the log-based arithmetic model to be appended to the B9/7 DWT structure is slightly altered to suit the IEEE 754 standard, as shown in Figure 7. A bit segregator takes the input fed in the standard format and separates it into three individual pieces of data. The sign bit of the input is operated on by either an Exor or a comparator module, depending on the module activated by the operator switch. Similarly, the exponent bits are manipulated by either the operator-switch-activated bit shifting module or the adder module. The log-based arithmetic unit performs the floating point computations as shown in Figure 8.
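The bit segregator's job can be mimicked in software. The following sketch, using Python's struct module purely for illustration, splits a single-precision value into the s, E, and m fields named above.

```python
import struct

def segregate(x):
    """Split an IEEE 754 single-precision value into (sign, exponent, mantissa)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # raw 32-bit pattern
    sign = bits >> 31                # 1 sign bit
    exponent = (bits >> 23) & 0xFF   # 8 biased exponent bits
    mantissa = bits & 0x7FFFFF       # 23 fraction bits
    return sign, exponent, mantissa
```

For example, 1.0 encodes as (0, 127, 0), and -2.5 = -1.25 × 2^1 encodes as (1, 128, 0x200000).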
The log-based arithmetic unit embedded in the designed FPU utilizes a carry save adder for computing all arithmetic operations. It uses simple log principles, along with operational switches, to select the inputs based on the operation needed. If the adder operator is fed to the switch, the addition is carried out by merely adding or subtracting the mantissa bits according to the exponent and sign bits. First, the difference of the two exponents is calculated; the larger exponent is set as the tentative exponent of the result, and the mantissa of the operand with the smaller exponent is shifted to the right by the difference in the exponents. According to the sign bits, addition (if equal) or subtraction (if unequal) is performed on the mantissas to obtain the tentative mantissa of the result, which is then normalized and rounded off. If there is an overflow due to rounding, the mantissa is shifted right and the exponent is incremented by 1. The sign bit of the larger operand becomes the sign bit of the result. Similarly, the multiplication procedure is chosen when the multiplier input is fed to the operator switch. The overall data path involved in the multiplier component of this FPU architecture is simplified, since the computation involves only mapping; this simplifies the overall stages involved in multiplication. The mantissas of the input data are mapped to the corresponding logarithmic numbers in the LUT, and the logarithms are added. If an overflow shifts the result to the right, it is mapped with the antilogarithm LUT to obtain the mantissa of the result. The exponent of the result is obtained by simple addition of the exponent bits, and the sign bit of the result is obtained by Exoring both sign bits.
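The multiplication path just described — map the significands to logarithms, add, and map back, while the exponents are summed and the signs Exored — can be modeled in Python. This sketch substitutes exact math.log2 for the hardware LUTs, so it shows the data flow rather than the LUT quantization behavior.

```python
import math

def log_multiply(x, y):
    """Multiply two nonzero floats via log-domain addition:
    sign by XOR, exponents by integer addition, significands by log2/antilog."""
    sign = -1.0 if (x < 0.0) != (y < 0.0) else 1.0
    mx, ex = math.frexp(abs(x))  # abs(x) = mx * 2**ex with mx in [0.5, 1)
    my, ey = math.frexp(abs(y))
    log_sum = math.log2(mx) + math.log2(my)   # addition replaces multiplication
    return sign * 2.0 ** (ex + ey + log_sum)  # antilog recombines the result
```

With exact logarithms the result matches ordinary multiplication to floating point precision; the hardware's LUT-based coders introduce the small conversion errors analyzed in Section 4.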
As multiplication in this unit is realized with adders using the logarithmic number system (LNS), log coders play an important role in the design. The design of the log and antilog coders has been adopted from Paul et al. [23], with slight modifications in the interpolator design, as shown in Figure 9. This is a simple shift-based bit coder network. As the log word generated for the input is directly related to the accuracy of the output, different levels of log coders were designed. These log coders are classified into six levels, namely 6, 9, 12, 15, 18, and 21, based on the width of the log words generated. From these, an optimum log coder is chosen by implementing and testing all levels of log coders for best accuracy and minimum area utilization. As the antilog decoder is designed with a similar structure, most of the area used by the log coder can be reconfigured for the antilog decoder design. This in turn achieves a good area reduction and makes the proposed model well suited for embedding this FPU in the DWT structure.
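A shift-based log coder of this kind can be approximated in software with Mitchell's classic trick log2(1 + f) ≈ f. In the sketch below, the `word_bits` parameter plays the role of the 6- to 21-bit log word widths; the modified interpolator of the actual coder is not modeled, so this is only a rough stand-in.

```python
def shift_log2(n, word_bits):
    """Approximate log2 of a positive integer with a leading-one detector
    (integer part) and a truncated fraction of word_bits bits
    (Mitchell's approximation: log2(1 + f) ~= f)."""
    k = n.bit_length() - 1        # position of the leading one = integer part
    frac = n - (1 << k)           # bits below the leading one
    q = (frac << word_bits) >> k  # quantize the fraction to word_bits bits
    return k + q / (1 << word_bits)
```

Powers of two are coded exactly, while other inputs carry the approximation error of the linear interpolation (at most about 0.086 for plain Mitchell coding), which is why the coder levels have to be swept to find the width beyond which extra log-word bits buy no further accuracy.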
3.2 Block-based parallel-pipelined SPIHT
SPIHT is a widely used compression algorithm for wavelet-transformed images. To reduce the complexity of SPIHT, an entire picture is decomposed into 4 × 4 sets, and the significance of the union of each 4 × 4 set and its descendants is tested. The SPIHT algorithm encodes wavelet coefficients bit plane by bit plane, from the most significant bit plane to the least significant bit plane. The algorithm consists of three passes: the insignificant set pass (ISP), the insignificant pixel pass (IPP), and the significant pixel pass (SPP). According to the results of the (n + 1)th bit plane, the n th bits of pixels are categorized and processed by one of the three passes. Insignificant pixels classified by the (n + 1)th bit plane are encoded by the IPP, whereas significant pixels are processed by the SPP. The main goal of each pass is the generation of an appropriate bit stream according to the wavelet coefficient information. If a set in this pass is classified as a significant set in the n th bit plane, it is decomposed into smaller sets until the smaller sets become insignificant or they correspond to single pixels. If the smaller sets are insignificant, they are handled by the ISP. If the smaller sets correspond to single pixels, they are handled by either the IPP or the SPP, depending on their significance. In the original SPIHT algorithm, three linked lists are maintained for processing the ISP, IPP, and SPP. In each pass, the entries in the linked list are processed in first-in first-out (FIFO) order. This FIFO order creates a large overhead, which slows down the computation of the SPIHT algorithm. To speed up the algorithm, sets and pixels are visited in the Morton order, as shown in Figure 2b, and processed by the appropriate pass.
This modified algorithm, called Morton order SPIHT, is relatively easy to implement in hardware with a slight degradation of the compression efficiency when compared with the original SPIHT. The block diagram of the block-based parallel-pipelined SPIHT architecture is shown in Figure 10. The 8 × 8 block of discrete wavelet transformed coefficients is given as the input and sliced into eight bit planes. The most significant bit (MSB) plane is given to the insignificant pixel pass in the first clock cycle, which finds the significance of each macro and minor block. In the second clock cycle, the insignificant bit planes are given for sorting. The sorting pass updates the insignificant sorting pass. Using the significance bit stream from the insignificant sorting pass, the refining pass (RP) codes the significant micro blocks and gives the coded output. When all the blocks in the 8 × 8 coefficient set become significant, the controller block stops the sorting pass (SP), and hence, unnecessary updating of insignificant sorting passes is avoided. Thus, the pipelined ISP, along with the parallel RP and SP, increases the throughput.
4 Experiment results and analysis
The overall performance of the proposed image processing system is analyzed in this section. As DWT has a wide range of applications in various fields, the proposed system utilizes its efficiency for enhanced image handling and offers good improvements in speed and area consumption. Moreover, the accuracy of the output is also addressed by modifying the computation parts of the DWT structure, which utilize the logarithmic principle and hence yield a good reduction in power. Furthermore, at each level of the DWT, precision also depends on the decomposition at that stage. Hence, it is necessary to select an optimized level of DWT. During the experimentation of this proposal, the optimized level of DWT is selected based on performance parameters such as peak signal-to-noise ratio (PSNR), compression ratio (CR), and wavelet decomposition computational complexity. The architecture is first verified using Matlab for the image parameters and then implemented in hardware to analyze its hardware efficiency.
4.1 Image parameters analysis
The goal is to design an optimized DWT structure with floating point computation units. Hence, an efficient level of DWT has to be chosen for modeling in terms of performance parameters. This is done by various image analyses on standard images obtained from a public image bank [36]. These are 256 × 256, 8 bits per pixel (8 bpp) bitmap images that can be grouped into three image types. Lena and Cameraman are low-frequency (LF) images, Woman and Parrots are medium-frequency (MF) images, and Mandrill and Satellite are high-frequency (HF) images. The frequency type of an image is decided based on the percentage of total image energy (96% to 100% LF, 92% to 96% MF, and ≤92% HF) in the aa subband obtained after one level of decomposition. To evaluate the performance of the proposed architecture, each image was decomposed into different levels with the B9/7 wavelet transform, and the transform coefficients were coded using the SPIHT algorithm with different compression ratios. The reconstructed image was compared with the original image, and the PSNR values were computed using Equation 16 and are presented in Table 1 and Figure 11.
$$\text{PSNR}=10{ \log}_{10}\left(\frac{{255}^{2}}{{E}_{\text{ms}}^{2}}\right)$$(16)
where 255 is the maximal gray level of the original image and ${E}_{\text{ms}}^{2}$ is the sample mean squared error:
$${E}_{\text{ms}}^{2}=\frac{1}{{N}^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}{\left(X\left(i,j\right)-Y\left(i,j\right)\right)}^{2}$$
where X(i, j) represents the original N × N image and Y(i, j) represents the reconstructed image.
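A small Python sketch of the PSNR computation in Equation 16 follows; the 8-bit peak value of 255 is taken from the text, while the function name and flat-sequence interface are our own choices for illustration.

```python
import math

def psnr_8bpp(original, reconstructed):
    """Peak signal-to-noise ratio in dB between two equal-sized 8-bpp images,
    given as flat sequences of gray levels."""
    n = len(original)
    mse = sum((x - y) ** 2 for x, y in zip(original, reconstructed)) / n
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * math.log10(255.0 ** 2 / mse)
```

For instance, a uniform error of 5 gray levels gives an MSE of 25 and a PSNR of about 34.15 dB.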
From Table 1, it is clearly observed that the five-level DWT attains a higher PSNR value, irrespective of compression ratio, than all other levels. The next stage after the DWT is SPIHT coding, which requires a higher level of decomposition; this also supports the selection of five-level decomposition as the generalized case. From Figure 12, it is clearly seen that the DWT with five-level decomposition attains a good PSNR value; hence, it is designed and implemented in hardware using Verilog HDL and synthesized on Xilinx and Altera FPGAs to verify its device-level performance based on VLSI parameters.
4.2 Numerical accuracy analysis
This work is also concerned with precision, which is the most important factor of this design. As the B9/7 DWT structure utilizes floating point coefficients, accuracy of the result mainly depends on the fractional computational values. Hence, the results obtained with normal integer computation units in the DWT suffer from poor accuracy. The addition of floating point operation units increases the accuracy; on the other hand, it also increases area and delay overhead. Hence, a logarithm-based FPU is integrated with the DWT structure to achieve a good reduction in area with a higher improvement in accuracy. As the whole model depends on the log values, the accuracy of the log values is directly related to the accuracy of the result. Furthermore, as standard single-precision IEEE 754 has 23 mantissa bits, the accuracy also depends on the correctness of these bits. So, in the experimental phase, the accuracy analysis is done in two ways: output accuracy and bit-level accuracy. As accuracy is mostly discussed in terms of its contrary, the error rate is considered when discussing accuracy.
The product of a regular multiplication demands twice the bit size of the multiplicands. Hence, in floating point multiplication, the product has to be truncated to fit the standard IEEE 754 format. As the product has to be rounded off, there may be some loss in the results. The error occurring during round off can be predicted from the 'round-off error bounds analysis' done by Paliouras et al. [24]. This study found that the error bound depends directly on the number of mantissa bits, t, and not on the operations performed.
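That the bound is set by the mantissa width alone is easy to check empirically: rounding an exact double-precision product to single precision (t = 23 fraction bits) keeps the relative error within the familiar half-ulp bound of 2^-(t+1) = 2^-24. The Python check below is illustrative only and is not the analysis of [24].

```python
import struct

def round_to_f32(x):
    """Round a double to the nearest IEEE 754 single-precision value."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

def max_relative_rounding_error(pairs):
    """Largest relative error when exact products are rounded to float32."""
    worst = 0.0
    for a, b in pairs:
        exact = a * b                  # double-precision product
        rounded = round_to_f32(exact)  # squeezed into 23 mantissa bits
        worst = max(worst, abs(rounded - exact) / abs(exact))
    return worst
```

For ordinary normal-range operands the returned value never exceeds 2**-24, regardless of which operands are multiplied.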
Whenever a floating point value is rounded off, there is a steady loss of data. However, in the case of the LNS implementation, the obtained product is only one bit wider than the multiplicands. Hence, as the rounding-off error gets reduced, the error bound depends only on the mantissa bits. The numerical computations are much more complex when involving 47 bit levels, so it is hard to tabulate the actual results. Hence, deviations of the result with respect to the average round-off error bounds for standard test multiplications on the mantissa bits are tabulated. Tables 2 and 3 show the accuracies of both the designed multiplier and the existing Wallace tree multiplier. Table 2 gives the percentage of output error generated for the set of input test vectors, showing how far the models deviate from the actual results. As the product of the Wallace multiplier has to be rounded off from 64 to 32 bits, most of the significant values in the result are suppressed. On the other hand, the result of the added log transformation is only 33 bits, including a carry bit. Though the log conversion and reconversion produce a few errors, the proposed model outperforms the existing integer-styled FPUs. Hence, from these detailed comparisons, the proposed structure claims an accuracy improvement of 71% over the existing Wallace-based FPUs.
The data presented in Figure 13 show that the accuracy of the Wallace multiplier depends linearly on the bit size, whereas the accuracy of the log-based multiplier increases exponentially with the input bit size. Table 3 further reports the bit-level accuracy of both designs as the percentage of corrupted bits in the results, counting both '0's read as '1's and '1's read as '0's, which clearly visualizes the bit performance of both models. Although the bit error of the Wallace multiplier appears roughly constant irrespective of bit width, columns 2 and 4 clearly show that log words wider than 12 bits are more accurate than Wallace tree-based multiplication.
4.3 Hardware analysis
The log-based floating point computation achieves superior accuracy compared with conventional floating point arithmetic. The computation unit based on the log principle was therefore integrated into the biorthogonal DWT structure, which was then implemented in FPGA hardware to analyze its performance. The analyses were carried out in two different FPGA environments to demonstrate the versatility of the proposed design, as no built-in IP cores were used.
4.3.1 Hardware result analysis based on Xilinx device
The integer-based and floating point-based DWT structures were implemented on a Xilinx Virtex-6 XC6VLX240 device [37]. Table 4 shows that, compared with the integer-based DWT, the floating point DWT is more accurate with lower latency. The report also shows that the floating point-based DWT is highly power-efficient, which is achieved by reducing signal activity and operator strength using logarithmic principles. The designed B9/7 DWT structure with log-based FPUs thus outperforms the integer-based DWT, with improved accuracy, 47% lower power, and 28% less delay at comparable area utilization. For comparison, Table 5 lists the results of different FPGA-based SPIHT image compression systems [30], [38–40]. The experimental results show that the proposed image processing system has lower area utilization, with a maximum clock frequency of 133.33 MHz.
4.3.2 Hardware result analysis based on Altera device
To obtain more realistic results, the proposed image processing core was implemented on the Altera® DE2-115 board [41], which provides built-in support for external memories and a video graphics array (VGA) interface intellectual property (IP) block to hold and display large images of up to 2 MB. Quartus II 10.2 was used to map the design to an Altera Cyclone IV EP4CE115F29C7 FPGA, and the results are reported in Table 6. The core is driven by specific sync signals controlled by external pins on the board; five different combinations are used to generate the sync signals, as shown in Table 7. The system is first initialized using the Rst button. Setting the M_cntrl switch low then loads the input images from flash, and an LED indicates when the image has been transferred sequentially to the built-in RAM. Once the built-in memory is loaded, a five-level DWT is activated by enabling the D_active switch, and a red light-emitting diode (LED) indicates when the process completes. The R/W sync is then activated automatically and enables the SPIHT. Similarly, the inverse process is triggered by the ID_active switch, which invokes the inverse SPIHT and IDWT cores, with a green LED indicating completion. Finally, the V_contl key is activated to enable the VGA, and the reconstructed image is shown on the VGA display unit.
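The control sequence described above can be sketched as a small state machine. This is a behavioral model for illustration only, not the board's RTL; the signal names (Rst, M_cntrl, D_active, ID_active, V_contl) mirror the text, while the step names and transition function are assumptions:

```python
from enum import Enum, auto

class Step(Enum):
    IDLE = auto()      # after Rst
    LOAD = auto()      # image copied from flash to built-in RAM
    DWT = auto()       # five-level DWT running
    SPIHT = auto()     # R/W sync enables SPIHT automatically
    INVERSE = auto()   # inverse SPIHT + IDWT
    DISPLAY = auto()   # VGA shows the reconstructed image

def next_step(step, rst, m_cntrl, d_active, id_active, v_contl):
    """One transition of the hypothetical control FSM for the DE2-115 core."""
    if rst:
        return Step.IDLE
    if step is Step.IDLE and not m_cntrl:   # M_cntrl low loads the image from flash
        return Step.LOAD
    if step is Step.LOAD and d_active:      # D_active starts the five-level DWT
        return Step.DWT
    if step is Step.DWT:                    # R/W sync then enables SPIHT automatically
        return Step.SPIHT
    if step is Step.SPIHT and id_active:    # ID_active runs inverse SPIHT and IDWT
        return Step.INVERSE
    if step is Step.INVERSE and v_contl:    # V_contl enables the VGA output
        return Step.DISPLAY
    return step
```

Walking the model through the documented sequence (load, DWT, SPIHT, inverse, display) ends in the DISPLAY step, matching the order of operations in Table 7.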
To compare with the architectures reported in [4] and [11, 12], the proposed architecture was also tested on a Stratix EP1S25B672C7 FPGA. The experimental results summarized in Table 8 show that the numbers of combinational functions, logic registers, and memory bits in the proposed architecture are reduced by 24%, 90%, and 12.78%, respectively, compared with the Tian architecture [12]. Furthermore, Table 6 shows that the design is advantageous across device families, as it does not use the internal IP cores of the FPGAs and is designed for optimization. The speed of the system increases in each successive device family as device-level optimization improves. This shows that the system can be adopted in any environment and is well suited to portable imaging devices such as mobile phones and digital cameras.
5 Conclusions
This paper has proposed an enhanced image processing system that combines a DWT structure using log-based floating point computation units with SPIHT coders. Efficient decomposition levels for the DWT and SPIHT algorithms must be chosen for hardware implementation, and detailed analysis on various test images showed that five-level DWT decomposition with block-based, parallel-pipelined SPIHT gives a good PSNR value irrespective of the compression ratio. The paper adopted a modified arithmetic unit in the DWT structure to achieve good accuracy with minimum latency and power. The modification targets the computation units of the DWT structure, which are conventionally integer-styled operation units. Because floating point operations are far more complex than integer operations, the complexity of the computation hardware grows and the efficiency of the DWT operation degrades; the log-based computation structure introduced here reduces the strength of those operations. The results further show that the accuracy of the DWT increases because rounding-off errors are smaller with log transformations. The overall structure achieved a 25% improvement in accuracy with the proposed log-based FPUs, and the use of LNS provides a 47% power reduction as overall signal activity and strength are reduced. The proposed structure thus features high speed, good accuracy, and low power utilization, and its adoption in the proposed image processing system yields good hardware optimization. Finally, the model was implemented on different FPGAs to test its robustness and versatility, showing that it is well suited for portable image-analyzing gadgets.
References
 1.
Sweldens W: The lifting scheme: a custom-design construction of biorthogonal wavelets. Appl. Comput. Harmon. Anal. 1996, 3(2):186-200. 10.1006/acha.1996.0015
 2.
Daubechies I, Sweldens W: Factoring wavelet transforms into lifting steps. J. Fourier Anal. Appl. 1998, 4(3):247-269. 10.1007/BF02476026
 3.
Acharya T, Chakrabarti C: A survey on lifting-based discrete wavelet transform architectures. J. VLSI Signal Process. 2006, 42: 321-339. 10.1007/s1126600641913
 4.
Barua S, Carletta JE, Kotteri KA, Bell AE: An efficient architecture for lifting-based two-dimensional discrete wavelet transforms. Integr. VLSI J. 2005, 38(3):341-352. 10.1016/j.vlsi.2004.07.010
 5.
Andra K, Chakrabarti C, Acharya T: A VLSI architecture for lifting-based forward and inverse wavelet transform. IEEE Trans. Signal Process. 2002, 50(4):966-977. 10.1109/78.992147
 6.
Shi G, Liu W, Zhang L, Li F: An efficient folded architecture for lifting-based discrete wavelet transform. IEEE Trans. Circuits Syst. II 2009, 56(4):290-294.
 7.
Huang CT, Tseng PC, Chen LG: Flipping structure: an efficient VLSI architecture of lifting-based discrete wavelet transform. IEEE Trans. Signal Process. 2004, 52(4):1080-1088. 10.1109/TSP.2004.823509
 8.
Kim J, Park T: High performance VLSI architecture of 2D discrete wavelet transform with scalable lattice structure. World Acad. Sci. Eng. Technol. 2009, 54: 591-596.
 9.
Jiang W, Ortega A: Lifting factorization-based discrete wavelet transform architecture design. IEEE Trans. Circuits Syst. Video Technol. 2001, 11(5):651-657. 10.1109/76.920194
 10.
Zhang W, Jiang Z, Gao Z, Liu Y: An efficient VLSI architecture for lifting-based discrete wavelet transform. IEEE Trans. Circuits Syst. II 2012, 59(3):158-162.
 11.
Cheng C, Parhi KK: High-speed VLSI implementation of 2-D discrete wavelet transform. IEEE Trans. Signal Process. 2008, 56(1):393-403.
 12.
Tian X, Wu L, Tan YH, Tian JW: Efficient multi-input/multi-output VLSI architecture for two-dimensional lifting-based discrete wavelet transform. IEEE Trans. Comput. 2011, 60(8):1207-1211.
 13.
Wu BF, Hu YQ: An efficient VLSI implementation of the discrete wavelet transform using embedded instruction codes for symmetric filters. IEEE Trans. Circuits Syst. Video Technol. 2003, 13(9):936-943. 10.1109/TCSVT.2003.816509
 14.
Zhang C, Wang C, Ahmad MO: A pipeline VLSI architecture for fast computation of the 2-D discrete wavelet transform. IEEE Trans. Circuits Syst. I 2012, 59(8):1775-1785.
 15.
Lan X, Zheng N, Liu Y: Low-power and high-speed VLSI architecture for lifting-based forward and inverse wavelet transform. IEEE Trans. Consum. Electron. 2005, 51(2):379-386. 10.1109/TCE.2005.1467975
 16.
Lee DU, Kim LW, Villasenor JD: Precision-aware self-quantizing hardware architectures for the discrete wavelet transform. IEEE Trans. Image Process. 2012, 21(2):768-777.
 17.
Beauchamp MJ, Hauck S, Underwood KD, Hemmert KS: Architectural modifications to enhance the floating-point performance of FPGAs. IEEE Trans. Very Large Scale Integration (VLSI) Syst. 2008, 16(2):177-187.
 18.
Ho CH, Yu CW, Leong PHW, Luk W, Wilton SJE: Floating-point FPGA: architecture and modeling. IEEE Trans. Very Large Scale Integration (VLSI) Syst. 2009, 17(12):1709-1718.
 19.
Even G, Mueller SM, Seidel PM: A dual precision IEEE floating-point multiplier. Integr. VLSI J. 2000, 29(2):167-180. 10.1016/S01679260(00)000067
 20.
Yu CW, Smith AM, Luk W, Leong PHW, Wilton SJE: Optimizing floating point units in hybrid FPGAs. IEEE Trans. Very Large Scale Integration (VLSI) Syst. 2012, 20(7):45-65.
 21.
Chong YJ, Parameswaran S: Configurable multimode embedded floating-point units for FPGAs. IEEE Trans. Very Large Scale Integration (VLSI) Syst. 2011, 19(11):2033-2044.
 22.
Anand TH, Vaithiyanathan D, Seshasayanan R: Optimized architecture for floating point computation unit. In Int. conf. on emerging trends in VLSI, embedded sys., nano elec. and tele. sys. Thiruvannamalai, India; 2013:1-5.
 23.
Paul S, Jayakumar N, Khatri SP: A fast hardware approach for approximate, efficient logarithm and antilogarithm computations. IEEE Trans. Very Large Scale Integration (VLSI) Syst. 2009, 17(2):269-277.
 24.
Paliouras V, Karagianni K, Stouraitis T: Error bounds for floating-point polynomial interpolators. IEE Electron. Lett. 1999, 35(3):195-197. 10.1049/el:19990143
 25.
IEEE standard for floating-point arithmetic, IEEE Std 754-2008. IEEE Inc, New York, NY, USA; 2008:1-70. doi:10.1109/IEEESTD.2008.4610935
 26.
Vaithiyanathan D, Seshasayanan R: High speed low power DWT structure with log based FPU in FPGAs. In International conference on green computing, communication and conservation of energy (ICGCE 2013). Chennai, India; 2013:308-313. doi:10.1109/ICGCE.2013.6823451
 27.
Said A, Pearlman WA: A new fast and efficient image codec based on set partitioning in hierarchical trees. IEEE Trans. Circuits Syst. Video Technol. 1996, 6(3):243-250. 10.1109/76.499834
 28.
Wheeler FW, Pearlman WA: SPIHT image compression without lists. IEEE international conference on acoustics, speech, and signal processing (ICASSP), vol. 4 2000, 2047-2050.
 29.
Corsonello P, Perri S, Staino G, Lanuzza M, Cocorullo G: Low bit rate image compression core for onboard space applications. IEEE Trans. Circuits Syst. Video Technol. 2006, 16(1):114-128.
 30.
Jyotheswar J, Mahapatra S: Efficient FPGA implementation of DWT and modified SPIHT for lossless image compression. J. Syst. Arch. 2007, 53: 369-378. 10.1016/j.sysarc.2006.11.009
 31.
Cheng CC, Tseng PC, Chen LG: Multimode embedded compression codec engine for power-aware video coding system. IEEE Trans. Circuits Syst. Video Technol. 2009, 19(2):141-150.
 32.
Fry T, Hauck S: SPIHT image compression on FPGAs. IEEE Trans. Circuits Syst. Video Technol. 2005, 15(9):1138-1147.
 33.
Jin Y, Lee HJ: A block-based pass-parallel SPIHT algorithm. IEEE Trans. Circuits Syst. Video Technol. 2012, 22(7):1064-1075.
 34.
Zervas ND, Anagnostopoulos GP, Spiliotopoulos V, Andreopoulos Y, Goutis CE: Evaluation of design alternatives for the 2-D discrete wavelet transform. IEEE Trans. Circuits Syst. Video Technol. 2001, 11: 1246-1262. 10.1109/76.974679
 35.
Zhang C, Long Y, Kurdahi F: A hierarchical pipelining architecture and FPGA implementation for lifting-based 2D DWT. J. Real-Time Image Proc. 2007, 2: 281-291. 10.1007/s1155400700576
 36.
The USC-SIPI image database. Univ. Southern California, Signal and Image Processing Inst. 2011. Available: http://sipi.usc.edu/database/
 37.
Virtex-6 FPGA data sheet. Xilinx, Inc, San Jose, CA, USA; 2012. http://www.xilinx.com/support/documentation/data_sheets/ds150.pdf. Accessed 18 Feb 2013
 38.
Corsonello P, Perri S, Zicari P, Cocorullo G: Microprocessor-based FPGA implementation of SPIHT image compression system. Microprocessors and Microsystems 2005, 29(6):299-305. 10.1016/j.micpro.2004.08.013
 39.
Chew LW, Chia WC, Ang LM, Seng KP: Very low-memory wavelet compression architecture using strip-based processing for implementation in wireless sensor networks. EURASIP J. Embed. Syst. 2009.
 40.
Liu K, Belyaev E, Guo J: VLSI architecture of arithmetic coder used in SPIHT. IEEE Trans. Very Large Scale Integration (VLSI) Syst. 2012, 20(4):697-710.
 41.
DE2-115 FPGA board data sheet. Altera Corporation, San Jose, CA, USA; 2010. ftp://ftp.altera.com/up/pub/Altera_Material/12.1/Boards/DE2115/DE2_115_User_Manual.pdf. Accessed 15 March 2013
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Dhandapani, V., Ramachandran, S. Power-optimized log-based image processing system. J Image Video Proc 2014, 37 (2014). https://doi.org/10.1186/16875281201437
Keywords
 Discrete wavelet transform (DWT)
 Lifting scheme
 Log principles
 Floating point unit (FPU)
 Set partitioning in hierarchical trees (SPIHT)
 Image coding
 Field-programmable gate array (FPGA) implementation
 Real-time processing