
Power-optimized log-based image processing system

Abstract

The continuous development of devices such as mobile phones and digital cameras has led to a growing amount of research in the image processing field. Today's image-acquisition devices are battery operated, and hence power optimization becomes a major factor in the hardware implementation of image systems. This paper proposes an image processing system which utilizes a discrete wavelet transform (DWT) structure integrated with set partitioning in hierarchical trees (SPIHT) coding. The overall advantage of the proposal is achieved by modifying the arithmetic units in the DWT structure. By utilizing a logarithm-based floating point unit (FPU) in the DWT computation structures, the logarithmic number system (LNS) adaptation of the arithmetic unit enhances overall accuracy while reducing area and power consumption. To ensure the versatility of the proposal and to further evaluate the performance and correctness of the structure, the model is implemented on Xilinx and Altera field-programmable gate array (FPGA) devices. The analyses obtained from the implementation show that the structure incorporating the log-based FPU is 25% more accurate and consumes 47% less power than DWTs built with integer-styled FPUs, along with enhanced speed and optimal area utilization.

1 Introduction

Discrete wavelet transform (DWT) is increasingly being used for image coding. In particular, biorthogonal symmetric wavelets have shown remarkable performance in still image compression. Hence, this paper proposes an image processing system built around the biorthogonal 9/7 DWT structure. The DWT has traditionally been implemented using the convolution method, which demands a large number of computations and a large amount of storage, features that are not desirable for high-speed or low-power applications. Sweldens [1] proposed a new mathematical formulation for the wavelet transform based on spatial construction of the wavelets, and a very versatile scheme for its factorization was suggested in [2]. This approach is called the lifting-based wavelet transform. The main feature of the lifting-based DWT scheme is to break up the high-pass and low-pass filters into a sequence of upper and lower triangular matrices and convert the filter implementation into banded matrix multiplications. This scheme has several advantages over the convolution technique, including 'in-place' computation of the DWT and symmetric forward and inverse transforms. Accordingly, the DWTs implemented using the lifting scheme in the JPEG 2000 standard are the biorthogonal lossless 5/3 integer and the lossy 9/7 floating point filter banks. Numerous architectures have been proposed to provide low-power, high-speed, and area-efficient hardware implementations of the DWT [3–16]. Shi et al. [6] proposed an efficient folded architecture (EFA) with low hardware complexity. The flipping structure is another important DWT architecture, proposed by Huang et al. [7]. A high-speed, reduced-area two-dimensional (2-D) DWT architecture was proposed by Zhang et al. [10]. While most of these architectures focus on optimizing the critical path, only a few, such as Lee et al. [16], address coefficient precision in addition to the internal data path.

This paper focuses on the lossy biorthogonal 9/7 lifting-based DWT, which incurs higher computational complexity owing to its floating point computations. Implementing this structure in hardware requires additional complex hardware to handle the floating point computations, which demands a separate processing unit and leads to the design of a floating point unit (FPU). A survey of existing FPUs shows that their arithmetic computations follow the same principles as ordinary arithmetic logic unit (ALU) operations, acting as an additional prop for normal ALUs. An island-style FPGA with embedded FPUs [17] was proposed by Beauchamp et al., while a coarse-grained FPU was suggested by Ho et al. [18]. Even et al. [19] suggest a multiplier that operates on either single-precision or double-precision floating point numbers. An optimized FPU in a hybrid FPGA was suggested by Yu et al. [20] and a configurable multimode FPU for FPGAs by Chong and Parameswaran [21]. Performance improvements and optimizations of these models have been studied and adopted in each successive development cycle. However, while these models fine-tune the FPU in terms of area, they offer no provisions for power reduction or accuracy enhancement. Anand et al. [22] proposed a log lookup table (LUT)-based FPU, which utilizes the logarithmic principle to achieve good accuracy with reduced power consumption. However, this model has some serious drawbacks, including increased delay and additional memory for handling the log LUT, which affect the performance in terms of area and speed. Hence, the proposed scheme suggests an efficient model for performing floating point operations that reduces power consumption by lowering the operation complexity through log conversion [23]. This reduces the overall computation burden, as the process is simply a numerical transformation to the logarithmic domain. Thus, a reduction in power consumption and an increase in accuracy are attained with optimal area usage [24]. Direct mapping of floating point numerals is not possible, and hence a standardized form is adopted using the IEEE 754 single-precision floating point standard [25]. An optimized DWT architecture with a log-based FPU is proposed; a preliminary version of this work was presented in [26]. This paper revises the external memory access and provides a more accurate and detailed error analysis along with simulation results.

After the lifting-based DWT was introduced, several coding algorithms were proposed to code the wavelet coefficients efficiently while taking storage space and redundancy into consideration. These algorithms are the embedded zerotree wavelet (EZW), embedded block coding with optimized truncation (EBCOT), and set partitioning in hierarchical trees (SPIHT). Among these, the SPIHT algorithm is the most preferable because of its low computational complexity and better image compression performance. SPIHT coding, proposed by Said and Pearlman in 1996 [27], does not require arithmetic coding and provides a cheaper and faster hardware solution. It was modified by Wheeler and Pearlman [28], who introduced the no-list SPIHT (NLS) to reduce memory usage. Later, Corsonello et al. [29] proposed a low-cost implementation of NLS in order to improve the coding speed. The work in [30] modified the scanning process and utilized fixed memory allocation for the data list to reduce the hardware complexity. In order to achieve high throughput, Cheng et al. [31] proposed a modified SPIHT that processes a 4 × 4 bit plane in one cycle. Fry and Hauck [32] improved on this model with a bit-plane-parallel SPIHT encoder architecture to further increase the throughput. More recently, Jin and Lee [33] proposed a block-based pass-parallel SPIHT (BPS) algorithm, which employs pipelining and parallelism and has the highest throughput among the existing architectures. Hence, we adopt the BPS in our image processing core.

This proposal introduces an enhanced image processing system, which utilizes a low-power DWT structure along with a log-based FPU and BPS coder. The optimized decomposition level of DWT is selected based on performance parameters such as peak signal-to-noise ratio, compression ratio, and computational complexity. To examine the specific hardware performance and trade-offs associated with the solutions presented here, the architecture is first verified in Matlab for the image parameters. In addition to this, the hardware implementation is carried out using Verilog hardware description language (HDL) and synthesized using Xilinx and Altera FPGA families to verify its device level performance based on VLSI parameters.

The rest of the paper is organized as follows. Section 2 gives the background supporting the basic understanding of the lifting-based discrete wavelet transform and SPIHT coding techniques. Section 3 presents the hardware implementation of the forward 2-D DWT with a modified computation unit adopting the log-based FPU and the SPIHT coders. Section 4 gives the detailed experimental setup for the proposed real-time image processing system and assesses the performance of the proposed architecture in comparison with existing architectures. Conclusions and final remarks are given in Section 5.

2 Background

2.1 Discrete wavelet transform

2.1.1 Lifting scheme

The lifting scheme is a computationally efficient way of implementing the DWT, and many references describe lifting-based DWT architectures [1–16]. The forward transform proceeds first with the lazy wavelet, then alternating dual lifting and primal lifting steps, and concludes with scaling. The inverse transform proceeds first with scaling, then alternating primal lifting and dual lifting steps, and finally the inverse lazy transform. The inverse transform can immediately be derived from the forward transform by running the scheme backwards and flipping the signs, as shown in Figure 1.

Figure 1. Wavelet transforms using the lifting scheme.

The lifting scheme implements a filter bank as a multiplication of upper and lower triangular matrices, where each matrix constitutes a lifting step [1, 2]. Let $\tilde{h}(z)$ and $\tilde{g}(z)$ be the low-pass and high-pass analysis filters, respectively, and let $h(z)$ and $g(z)$ be the low-pass and high-pass synthesis filters, respectively. The corresponding polyphase matrices are defined as

$\tilde{P}(z) = \begin{bmatrix} \tilde{h}_e(z) & \tilde{h}_o(z) \\ \tilde{g}_e(z) & \tilde{g}_o(z) \end{bmatrix} \quad \text{and} \quad P(z) = \begin{bmatrix} h_e(z) & g_e(z) \\ h_o(z) & g_o(z) \end{bmatrix}$ (1)

where $\tilde{h}_e$ contains the even coefficients and $\tilde{h}_o$ contains the odd coefficients:

$\tilde{h}_e(z) = \sum_k \tilde{h}_{2k}\, z^{-k} \quad \text{and} \quad \tilde{h}_o(z) = \sum_k \tilde{h}_{2k+1}\, z^{-k}$ (2)

It has been shown that if $(\tilde{h}, \tilde{g})$ is a complementary filter pair, the Euclidean algorithm can be used to decompose $\tilde{P}(z)$, which can always be factored into lifting steps as

$\tilde{P}(z) = \prod_{i=1}^{m} \begin{bmatrix} 1 & s_i(z) \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ t_i(z) & 1 \end{bmatrix} \begin{bmatrix} K & 0 \\ 0 & 1/K \end{bmatrix}$ (3)

The lifting wavelet transform consists of three steps as in Figure 1:

  1. Splitting. The original signal X(n) is split into even and odd sequences (lazy wavelet transform):

     $X_e(n) = X(2n)$ (4)

     $X_o(n) = X(2n+1)$ (5)

  2. Lifting. It consists of m steps of the following form:

     (a) Predict/dual lifting. If X(n) possesses local correlation, then X_e(n) and X_o(n) also have local correlation. Therefore, one subset is used to predict the other: the filtered even array is used to predict the odd array, and the new odd array is redefined as the difference between the existing array and the predicted one.

     $D(n) = X_o(n) - s_i\big(X_e(n)\big)$ (6)

     (b) Update/primal lifting. To eliminate the aliasing that appears due to the down-sampling of the original signal, the even array is updated using the filtered new odd array.

     $A(n) = X_e(n) + t_i\big(D(n)\big)$ (7)

     Eventually, after m pairs of prediction and update steps, the even samples become the low-frequency component while the odd samples become the high-frequency component.

  3. Normalization/scaling. After m lifting steps, scaling coefficients K and 1/K are applied to the even and odd samples, respectively, in order to obtain the low-pass and high-pass subbands.

For the biorthogonal 9/7 wavelet, four lifting steps and one scaling step can be used, where $s_1(z) = \alpha(1 + z^{-1})$, $s_2(z) = \gamma(1 + z^{-1})$, $t_1(z) = \beta(1 + z)$, and $t_2(z) = \delta(1 + z)$. The parameters α, β, γ, and δ are two-tap symmetric filter coefficients, and K and 1/K are scaling factors.

Lifting steps:

Predict P1: $d_i^1(n) = X_o(n) + \alpha\left[X_e(n) + X_e(n+1)\right]$ (8)

Update U1: $a_i^1(n) = X_e(n) + \beta\left[d_i^1(n-1) + d_i^1(n)\right]$ (9)

Predict P2: $d_i^2(n) = d_i^1(n) + \gamma\left[a_i^1(n) + a_i^1(n+1)\right]$ (10)

Update U2: $a_i^2(n) = a_i^1(n) + \delta\left[d_i^2(n-1) + d_i^2(n)\right]$ (11)

Scaling:

$a_i(n) = K \cdot a_i^2(n)$ (12)

$d_i(n) = (1/K) \cdot d_i^2(n)$ (13)

where α = -1.586134342, β = -0.05298011854, γ = 0.8829110762, δ = 0.4435068522, and K = 1.149604398.

The original data to be filtered is denoted by X(n), and the outputs $a_i$ and $d_i$ are the approximation and detail coefficients, respectively. We focus on the implementation of the lifting-based DWT, which incurs higher computational complexity because of its floating point computations. Hence, we suggest an efficient model for performing the floating point operations that reduces power by lowering the operation complexity through log conversion [22, 23].
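As an illustration of the data flow implied by Equations 8 to 13 (not the paper's Verilog implementation), the following Python sketch applies one level of the forward B9/7 lifting transform to a 1-D signal of even length; periodic extension via np.roll is an assumption made here for brevity, as the boundary treatment is not specified above.

```python
import numpy as np

ALPHA, BETA = -1.586134342, -0.05298011854
GAMMA, DELTA = 0.8829110762, 0.4435068522
K = 1.149604398

def b97_forward_1d(x):
    """One level of the lifting-based B9/7 forward DWT (Equations 8 to 13)."""
    x = np.asarray(x, dtype=np.float64)
    a = x[0::2].copy()   # even samples X_e(n)
    d = x[1::2].copy()   # odd samples  X_o(n)

    d += ALPHA * (a + np.roll(a, -1))        # Predict P1 (Equation 8)
    a += BETA * (np.roll(d, 1) + d)          # Update U1  (Equation 9)
    d += GAMMA * (a + np.roll(a, -1))        # Predict P2 (Equation 10)
    a += DELTA * (np.roll(d, 1) + d)         # Update U2  (Equation 11)

    return K * a, (1.0 / K) * d              # Scaling (Equations 12 and 13)
```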

2.2 Set partition in hierarchical trees

The SPIHT algorithm is applied to a wavelet-transformed image, which can be organized as a spatial orientation tree (SOT) as shown in Figure 2a. The arrows in Figure 2a represent the relationship between a parent and its offspring, and each node of the tree corresponds to a coefficient in the transformed image. SPIHT scans the DWT coefficients in Morton scanning order, as shown in Figure 2b, and assigns the parent-child hierarchy on the scanned coefficients.

Figure 2. Spatial orientation trees in SPIHT (a) and Morton scanning order of a 16 × 16 three-level wavelet-transformed image (b).

For a given set T, SPIHT defines a significance function, which indicates whether the set T has coefficients larger than a given threshold. $S_n(T)$, the significance of set T in the $n$th bit plane, is defined as in Equation 14:

$S_n(T) = \begin{cases} 1, & \max_{(i,j) \in T} \left| w_{i,j} \right| \geq 2^n \\ 0, & \text{otherwise} \end{cases}$ (14)

where w(i, j) is the coefficient value at position (i, j) in the wavelet domain, T is the set of coefficients, and $S_n(T)$ denotes the significance state of T at bit plane n.

When $S_n(T)$ is '0', T is called an insignificant set; otherwise, T is called a significant set. An insignificant set can be represented as a single bit '0'. A significant set is partitioned into subsets, and their significances have to be tested again based on the zerotree hypothesis. SPIHT encodes a given set T and its descendants (denoted by D(T)) together by checking the significance of T ∪ D(T) and by representing T ∪ D(T) as a single symbol '0' if T ∪ D(T) is insignificant. On the other hand, if T ∪ D(T) is significant, T has to be partitioned into subsets and each subset is tested independently.
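A minimal sketch of the significance test of Equation 14 is given below; coeffs is assumed to be the wavelet-transformed image as a 2-D array and positions the set T of (i, j) coordinates, with the recursive set-partitioning bookkeeping of full SPIHT omitted.

```python
import numpy as np

def significance(coeffs, positions, n):
    """S_n(T): return 1 if any |w(i, j)| in the set reaches 2**n, else 0."""
    return int(max(abs(coeffs[i, j]) for (i, j) in positions) >= (1 << n))
```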

The spatial orientation trees are illustrated in Figure 2 for a 16 × 16 image transformed by three levels of discrete wavelet decomposition. Each level is divided into four subbands. The subband a2a2 is divided into four groups of 2 × 2 coefficients, and within each group, each of the four coefficients becomes the root of a spatial orientation tree. The square denoted by R in Figure 2a represents the subband a3a3 (the low-pass subband) in Figure 2b, which corresponds to the root. In order to increase the speed of both the encoder and decoder, we adopt the BPS algorithm [33] for our image processing core. The BPS algorithm modifies the processing order of the original SPIHT algorithm so that an image is partitioned into multiple blocks and the coefficient trees are local to these blocks. Furthermore, BPS employs pipelining and parallelism, which gives it the highest throughput among the existing architectures.

3 Proposed architecture

Figure 3 shows the hierarchical placement of the different cores, which together form the proposed enhanced image processing system. The system incorporates a DWT structure with a BPS coder, and the overall flow is monitored by three different control units with proper synchronization signals. The function and nature of each block are discussed as follows.

Figure 3. Enhanced image processing system.

3.1 Discrete wavelet transform core

The memory organization and the multiplier implementation are the most critical parts of the hardware implementation of the 2-D DWT. In general, memory-based architectures can be classified into three categories: level-based, line-based, and block-based methods [34]. Based on the hardware constraints, any of the above methods could be selected; however, external memory access consumes the most power and requires more bandwidth. This system uses line-based processing for implementing the 2-D DWT architecture. This method uses embedded memory, which acts as a buffer between the row and column processing and thus avoids heavy dependence on external memory. The inputs to the system are fed from a memory management system as shown in Figure 4, which comprises a memory block that is updated at regular intervals based on the sync signals from the control blocks. The sync signals are generated to match the overall delay, which consists of two critical path delays ($T_{mul} + 2T_{adder}$).

Figure 4. Memory management system.

In hardware implementations, the multiplier occupies a large amount of resources. In order to provide a low-power, high-speed, and area-efficient multiplier for DWT computation, Shi et al. [6] adopted shift-add operations to optimize the multiplications, since the coefficients of the wavelet filters are constant. Zhang et al. [35] used the dedicated 18-bit multiplier blocks present in the FPGA. In spite of the numerous methods that have been proposed, the overall latency of the circuit still depends on the multiplier. Hence, it is necessary to modify the multiplier structure in order to achieve minimum area and computation time. Furthermore, the accuracy also depends on the floating point lifting coefficients and their arithmetic operations. These three factors demand modification of the computation units in the DWT architecture. Hence, this paper proposes a new computation unit based on the logarithmic principle in order to achieve minimal computation time with optimal area consumption. Moreover, adoption of the log principle results in a good power reduction, mainly because of reduced operator and operand strengths. The log-based floating point unit is discussed in the next subsection.

This paper proposes an enhanced architecture for the DWT whose main aim is to allow the computation components to achieve precise outputs. Figure 5 shows the proposed DWT structure, in which the modified computation unit adopts a log-based floating point unit to provide a good reduction in power and area with a modest compromise in speed. The B9/7 2-D DWT is computed in row-column fashion, i.e., row processing is carried out first, followed by column processing. The image, which is initially stored in the external memory, is read into the image processing core in row-by-row order. The row processor performs horizontal filtering on the rows using the six computing modules given in Equations 8 to 13 and writes the resultant approximation (a1) and detail (d1) coefficients to the local memory. Once a sufficient number of rows have been processed, the column processor starts vertical filtering, which consists of the same six computing modules. It fetches the approximation coefficients as inputs from the local memory and generates four subbands: a1a1, a1d1, d1a1, and d1d1. These four subbands are written back to the external memory in row-wise order. Multiple-level decomposition is performed on this architecture in non-interleaved fashion, and intermediate results between levels are stored in the external memory. For the higher levels, the approximation subband is read from the external memory and four higher-level subbands are generated using the same computing modules. This operation continues until the desired number of wavelet decomposition levels is finished, as shown in Figure 6. As the real-time image processing core requires high performance, we adopt a highly pipelined, log-based FPU for implementing the lifting steps.
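The row-column schedule described above can be summarized with the following Python sketch, which reuses the hypothetical b97_forward_1d routine from the earlier sketch and keeps every level's subbands in one array; the actual design performs this with the pipelined hardware modules and external memory accesses described in the text.

```python
import numpy as np

def dwt2_multilevel(image, levels, forward_1d):
    """Row-column 2-D DWT: filter rows, then columns; recurse on the LL subband."""
    out = np.asarray(image, dtype=np.float64).copy()
    ll = out
    for _ in range(levels):
        rows, cols = ll.shape
        row_out = np.empty_like(ll)
        for r in range(rows):                         # horizontal filtering
            a, d = forward_1d(ll[r, :])
            row_out[r, :cols // 2], row_out[r, cols // 2:] = a, d
        col_out = np.empty_like(row_out)
        for c in range(cols):                         # vertical filtering
            a, d = forward_1d(row_out[:, c])
            col_out[:rows // 2, c], col_out[rows // 2:, c] = a, d
        ll[:, :] = col_out                            # a1a1 | a1d1 | d1a1 | d1d1
        ll = ll[:rows // 2, :cols // 2]               # next level uses a1a1 only
    return out
```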

Figure 5. Direct mapped structure for the lifting-based B9/7 DWT.

Figure 6. Direct mapped 2-D DWT structure.

3.1.1 Log-based floating point unit

This paper utilizes the IEEE 754 standard format for representing floating point numerals, where a real number X is divided into three parts: 1 sign bit (s), 8 exponent bits (E), and 23 mantissa bits (m). This is represented as

$X = (-1)^s \times 1.m \times 2^{E-127}, \quad 0 \leq m < 1$ (15)

This demands three different computation procedures. Hence, the log-based arithmetic model to be appended to the B9/7 DWT structure is slightly altered to suit the IEEE 754 standard, as shown in Figure 7. A bit segregator takes the input, fed in the standard format, and separates it into three individual pieces of data. The sign bit of the input is processed by either an Ex-OR or a comparator module, depending on which module is activated by the operator switch. Similarly, the exponent bits are manipulated by either the operator-switch-activated bit shifting module or the adder module. The log-based arithmetic unit performs the floating point computations as shown in Figure 8.
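For reference, the bit-segregation step of Figure 7 corresponds to the following Python sketch, which unpacks an IEEE 754 single-precision word into the sign, exponent, and mantissa fields of Equation 15 (function names are illustrative only).

```python
import struct

def segregate(x):
    """Split a float into IEEE 754 single-precision sign, exponent, and mantissa."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = (bits >> 31) & 0x1
    exponent = (bits >> 23) & 0xFF      # biased by 127
    mantissa = bits & 0x7FFFFF          # 23 fraction bits of 1.m
    return sign, exponent, mantissa

# Reassembling per Equation 15 recovers the value (up to rounding):
s, e, m = segregate(0.8829110762)       # e.g., the lifting coefficient gamma
value = (-1) ** s * (1 + m / 2 ** 23) * 2 ** (e - 127)
```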

Figure 7. Log-based IEEE 754 compatible floating point unit.

Figure 8. Log-based arithmetic circuit model.

The log-based arithmetic unit embedded in the designed FPU utilizes a carry save adder for all arithmetic operations. It uses simple log principles, along with operational switches, to select the inputs based on the required operation. When the adder operator is fed to the switch, addition is carried out by adding or subtracting the mantissa bits according to the exponent and sign bits. The difference of the two exponents is calculated; if it is non-zero, the mantissa of the operand with the smaller exponent is shifted right by that difference, and the larger exponent becomes the tentative exponent of the result. According to the sign bits, the mantissas are added (if the signs are equal) or subtracted (if they are unequal) to obtain the tentative mantissa of the result, which is then normalized and rounded. If rounding causes an overflow, the mantissa is shifted right and the exponent is incremented by 1, and the sign of the larger operand becomes the sign bit of the result. Similarly, the multiplication procedure is chosen when the multiplier operator is fed to the switch. The overall data path of the multiplier component of this FPU architecture is simplified, since the computation involves only mapping; this reduces the number of stages involved in multiplication. The mantissas of the input data are mapped to the corresponding logarithmic numbers in the LUT, and the logarithms are added. Any overflow shifts the result to the right; the sum is then mapped through the antilogarithm LUT to obtain the mantissa of the result. The exponent of the result is obtained by simply adding the exponent bits, and the sign bit of the result is obtained by Ex-OR-ing the two sign bits.
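The following Python sketch models the behaviour of the multiplication path just described on the segregated IEEE 754 fields; it is an assumption-laden emulation that replaces the shift-based log/antilog coders of [23] with math.log2 and an exact antilog, so only the structure of the data path (log, add, antilog, exponent add, sign Ex-OR) is representative.

```python
import math

def log_mul(sign_a, exp_a, man_a, sign_b, exp_b, man_b):
    """Multiply two IEEE 754 single-precision operands via the log domain."""
    # Significands 1.m lie in [1, 2), so their base-2 logs lie in [0, 1).
    la = math.log2(1 + man_a / 2 ** 23)
    lb = math.log2(1 + man_b / 2 ** 23)
    lsum = la + lb                          # multiplication becomes addition
    exp_r = exp_a + exp_b - 127             # exponents add (one bias removed)
    if lsum >= 1.0:                         # significand product reached 2:
        lsum -= 1.0                         # renormalize and bump the exponent
        exp_r += 1
    man_r = round((2 ** lsum - 1) * 2 ** 23)   # antilog back to a 23-bit mantissa
    sign_r = sign_a ^ sign_b                # sign bit is the Ex-OR of the inputs
    return sign_r, exp_r, man_r
```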

As multiplication in this unit is realized with adders using the logarithmic number system (LNS), the log coders play an important role in the design. The design of the log and antilog coder is adopted from Paul et al. [23], with slight modifications to the interpolator design, as shown in Figure 9. The figure shows a simple shift-based bit coder network. As the log word generated for the input is directly related to the accuracy of the output, different levels of log coders were designed. These log coders are classified into six levels, namely 6, 9, 12, 15, 18, and 21, based on the width of the log words generated. From these, the optimum log coder is chosen by implementing and testing all levels for the best accuracy and minimum area utilization. As the antilog decoder is designed with a similar structure, most of the log coder area can be reconfigured for the antilog decoder. This in turn achieves a good area reduction and makes the proposed model well suited for embedding the FPU in the DWT structure.

Figure 9. Log cum antilog coder.

3.2 Block-based parallel-pipelined SPIHT

SPIHT is a widely used compression algorithm for wavelet-transformed images. To reduce the complexity of SPIHT, the entire picture is decomposed into 4 × 4 sets, and the significance of the union of each 4 × 4 set and its descendants is tested. The SPIHT algorithm encodes wavelet coefficients bit plane by bit plane, from the most significant bit plane to the least significant bit plane. The algorithm consists of three passes: the insignificant set pass (ISP), the insignificant pixel pass (IPP), and the significant pixel pass (SPP). According to the results of the (n + 1)th bit plane, the nth bits of pixels are categorized and processed by one of the three passes. Insignificant pixels classified by the (n + 1)th bit plane are encoded by the IPP, whereas significant pixels are processed by the SPP. The main goal of each pass is the generation of an appropriate bit stream according to the wavelet coefficient information. If a set in this pass is classified as significant in the nth bit plane, it is decomposed into smaller sets until the smaller sets become insignificant or correspond to single pixels. If the smaller sets are insignificant, they are handled by the ISP. If they correspond to single pixels, they are handled by either the IPP or the SPP, depending on their significance.

In the original SPIHT algorithm, three linked lists are maintained for processing the ISP, IPP, and SPP. In each pass, the entries in the linked list are processed in first-in first-out (FIFO) order. This FIFO order creates a large overhead, which slows down the computation of the SPIHT algorithm. To speed up the algorithm, sets and pixels are visited in the Morton order, as shown in Figure 2b, and processed by the appropriate pass. This modified algorithm, called Morton order SPIHT, is relatively easy to implement in hardware with a slight degradation of compression efficiency compared with the original SPIHT.

The block diagram of the block-based parallel-pipelined SPIHT architecture is shown in Figure 10. The 8 × 8 block of discrete wavelet transformed coefficients is given as the input and sliced into eight bit planes. The most significant bit (MSB) plane is given to the insignificant pixel pass in the first clock cycle, which finds the significance of each macro and minor block. In the second clock cycle, the insignificant bit planes are given for sorting. The sorting pass updates the insignificant sorting pass. Using the significance bit stream from the insignificant sorting pass, the refining pass (RP) codes the significant micro blocks and gives the coded output. When all the blocks in the 8 × 8 coefficient block become significant, the controller stops the sorting pass (SP), and hence the unnecessary updating of insignificant sorting passes is removed. Thus, the pipelined ISP along with the parallel RP and SP increases the throughput.
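As a small illustration of the Morton (Z-order) scan referred to above, the sketch below generates the visiting order for a square image whose side is a power of two; the pass scheduling and bit-plane pipelining of the BPS architecture [33] are not reproduced, and the exact orientation of the scan in Figure 2b is assumed.

```python
def morton_order(size):
    """Return (row, col) coordinates of a size x size image in Morton order."""
    def index_to_rc(index):
        row = col = 0
        for bit in range(size.bit_length()):
            col |= ((index >> (2 * bit)) & 1) << bit       # even bits -> column
            row |= ((index >> (2 * bit + 1)) & 1) << bit   # odd bits  -> row
        return row, col
    return [index_to_rc(k) for k in range(size * size)]

# morton_order(4)[:8] -> [(0,0), (0,1), (1,0), (1,1), (0,2), (0,3), (1,2), (1,3)]
```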

Figure 10. Block-based parallel-pipelined SPIHT.

4 Experiment results and analysis

The overall performance of the proposed image processing system is analyzed in this section. As the DWT has a wide range of applications in various fields, the proposed system exploits its efficiency for enhanced image handling and offers good improvements in speed and area consumption. Moreover, the accuracy of the output is also addressed by modifying the computation parts of the DWT structure, which utilize the logarithmic principle and hence yield a good reduction in power. Furthermore, the precision at each level of the DWT also depends on the decomposition at that stage; hence, it is necessary to select an optimized number of DWT levels. During the experimentation, the optimized DWT level is selected based on performance parameters such as the peak signal-to-noise ratio (PSNR), compression ratio (CR), and wavelet decomposition computational complexity. The architecture is first verified using Matlab for the image parameters and then implemented in hardware to analyze its hardware efficiency.

4.1 Image parameters analysis

The goal is to design an optimized DWT structure with floating point computation units. Hence, an efficient number of DWT levels has to be chosen for the model in terms of the performance parameters. This is done by various image analyses on standard images obtained from a public image bank [36]. These are 256 × 256, 8 bits per pixel (8 bpp) bitmap images that can be grouped into three image types. Lena and Cameraman are low-frequency (LF) images, Woman and Parrots are medium-frequency (MF) images, and Mandrill and Satellite are high-frequency (HF) images. The frequency type of an image is decided based on the percentage of total image energy (96% to 100% LF, 92% to 96% MF, and ≤92% HF) in the aa subband obtained after one level of decomposition. To evaluate the performance of the proposed architecture, each image was decomposed to different levels with the B9/7 wavelet transform, and the transform coefficients were coded using the SPIHT algorithm at different compression ratios. The reconstructed image was compared with the original image, and the PSNR values were computed using Equation 16; they are presented in Table 1 and Figure 11.
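The frequency-type classification described above can be expressed with the following sketch, which assumes the dwt2_multilevel and b97_forward_1d routines from the earlier sketches and interprets the thresholds as the share of total coefficient energy held by the aa (LL) subband after one decomposition level.

```python
import numpy as np

def frequency_type(image):
    """Classify an image as LF, MF, or HF from its one-level aa-subband energy."""
    coeffs = dwt2_multilevel(image, 1, b97_forward_1d)
    rows, cols = coeffs.shape
    ll_energy = np.sum(coeffs[:rows // 2, :cols // 2] ** 2)
    share = 100.0 * ll_energy / np.sum(coeffs ** 2)   # percentage of total energy
    if share >= 96.0:
        return 'LF'
    return 'MF' if share >= 92.0 else 'HF'
```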

Table 1 PSNR values for different decomposition levels
Figure 11. Reconstructed images obtained from different bits per pixel for (a) Lena, (b) Woman, and (c) Mandrill.

$\text{PSNR} = 10 \log_{10} \dfrac{255^2}{E_{ms}^2} \ \text{dB}$ (16)

where 255 is the maximal gray level of the original image and $E_{ms}^2$ is the sample mean squared error, given by

$E_{ms}^2 = \dfrac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \left[ X(i,j) - Y(i,j) \right]^2$ (17)

where X(i, j) represents the original N × N image and Y(i, j) represents the reconstructed image.
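Equations 16 and 17 translate directly into the following sketch for an N × N 8-bit image (NumPy is assumed purely for convenience).

```python
import numpy as np

def psnr(original, reconstructed):
    """PSNR in dB for 8-bit images, per Equations 16 and 17."""
    x = np.asarray(original, dtype=np.float64)
    y = np.asarray(reconstructed, dtype=np.float64)
    e2ms = np.mean((x - y) ** 2)                  # sample mean squared error
    return 10.0 * np.log10(255.0 ** 2 / e2ms)
```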

From Table 1, it is clearly observed that the five-level DWT attains a higher PSNR value than all other levels, irrespective of the compression ratio. The next stage after the DWT is SPIHT coding, which benefits from a higher level of decomposition; this also supports the selection of five-level decomposition as the general case. From Figure 12, it is clearly seen that the DWT with five-level decomposition attains a good PSNR value; hence, it is designed and implemented in hardware using Verilog HDL and synthesized on Xilinx and Altera FPGAs to verify its device-level performance based on VLSI parameters.

Figure 12. Bits per pixel vs. PSNR of the Lena and Woman images with different levels of decomposition.

4.2 Numerical accuracy analysis

This work is also concerned with precision, which is the most important factor of this design. As the B9/7 DWT structure utilizes floating point coefficients, the accuracy of the result mainly depends on the fractional computational values. Hence, results obtained with normal integer computation units in the DWT suffer from poor accuracy. The addition of floating point operation units increases the accuracy; on the other hand, it also increases the area and delay overhead. Hence, a logarithm-based FPU is integrated into the DWT structure to achieve a good reduction in area with a higher improvement in accuracy. As the whole model depends on log values, the accuracy of the log values is directly related to the accuracy of the result. Furthermore, as standard single-precision IEEE 754 has 23 mantissa bits, the accuracy also depends on the correctness of those bits. In the experimental phase, the accuracy analysis is therefore carried out in two ways: output accuracy and bit-level accuracy. As accuracy is most conveniently discussed in terms of its converse, the error rate is used in the following discussion.

The product of a regular multiplication demands twice the bit width of the multiplicands. Hence, in floating point multiplication, the product has to be truncated to fit the standard IEEE 754 format. As the product has to be rounded off, there may be some loss in the results. The error incurred during rounding can be predicted from the round off error bound analysis of Paliouras et al. [24], which found that the error bound depends directly on the mantissa and not on the operations. This is represented as

$\epsilon = 2^{-(t+1)}$ (18)

where t is the number of mantissa bits.
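As a worked instance of Equation 18, the 23 mantissa bits of the IEEE 754 single-precision format give a round off error bound of

$\epsilon = 2^{-(23+1)} = 2^{-24} \approx 5.96 \times 10^{-8}$

i.e., about half a unit in the last place of the 23-bit mantissa.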

Whenever a floating point result is rounded off, some data is steadily lost. However, in the case of the LNS implementation, the obtained product is only one bit wider than the multiplicands. Hence, as the rounding error is reduced, the error bound depends only on the mantissa bits. The numerical computations become much more complex when 47-bit values are involved, so it is hard to tabulate the actual results; hence, the deviations of the results with respect to the average round off error bounds for the standard test multiplications on the mantissa bits are tabulated. In Tables 2 and 3, the accuracies of both the designed multiplier and the existing Wallace tree multiplier are shown. Table 2 gives the percentage of output error generated for the set of input test vectors, showing how far the models deviate from the exact results. As the product of the Wallace multiplier has to be rounded from 64 to 32 bits, most of the significant values in the result are suppressed. On the other hand, the log-transformed addition produces only 33 bits, including a carry bit. Though the log conversion and reconversion introduce a small error, the proposed model outperforms the existing integer-styled FPUs. Hence, from these detailed comparisons, the proposed structure claims an accuracy improvement of 71% over the existing Wallace-based FPUs.

Table 2 Output accuracy percentage computation
Table 3 Output bit error rate computation

The data presented in Figure 13 show that the accuracy of the Wallace multiplier is linearly dependent on the bit size, whereas the accuracy of the log-based multiplier increases exponentially with the input bit size. Table 3 further presents the bit-level accuracy of both cases, showing the percentage of corrupted bits in the results, counting both '0's read as '1's and '1's read as '0's. This clearly visualizes the bit-level performance of both models. Though the bit error from the Wallace multiplier appears constant irrespective of the bit width, columns 2 and 4 clearly show that log words wider than 12 bits give higher accuracy than Wallace tree-based multiplication.
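A plausible reading of the bit-level metric in Table 3 is sketched below: the percentage of result bits that differ between a reference product and the product returned by a given multiplier over a set of test vectors (the exact test vectors used in the paper are not listed, so this is illustrative only).

```python
def bit_error_rate(reference_words, measured_words, width=32):
    """Percentage of flipped bits across paired width-bit result words."""
    mask = (1 << width) - 1
    flipped = sum(bin((r ^ m) & mask).count('1')
                  for r, m in zip(reference_words, measured_words))
    return 100.0 * flipped / (width * len(reference_words))
```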

Figure 13. Output accuracy percentages for the Wallace and log-based multipliers.

4.3 Hardware analysis

The log-based floating point computation achieves superior accuracy compared with normal floating point arithmetic. Hence, the computation unit based on the log principle is appended to the biorthogonal DWT structure, which is then implemented on FPGAs to analyze its performance in hardware. The analyses were done in two different FPGA environments to show the versatility of the proposed idea, as no inbuilt IPs were used.

4.3.1 Hardware result analysis based on Xilinx device

The integer-based and floating point-based DWT structures were implemented on the Xilinx Virtex-6 XC6VLX240 device [37] to assess their hardware performance. From Table 4, it can be seen that, compared with the integer-based DWT, the floating point DWT is more accurate with minimum latency. The report also shows that the floating point-based DWT is highly power efficient, which is achieved by reducing the signal and operational strength using logarithmic principles. Thus, the designed B9/7 DWT structure with log-based FPUs is a strong competitor to the integer-based DWT, offering improved accuracy along with 47% reduced power and 28% reduced delay with optimal area consumption. For comparison, Table 5 lists the results of different SPIHT image compression systems based on FPGA devices [30, 38–40]. From the experimental results, the proposed image processing system has lower area utilization, with a maximum clock frequency of 133.33 MHz.

Table 4 Hardware utilization comparison
Table 5 Performance comparisons of different SPIHT coders on FPGA devices

4.3.2 Hardware result analysis based on Altera device

To obtain more realistic results, the proposed image processing core was implemented on the Altera® DE2-115 board [41]. This board comprises inbuilt support for external memories and a video graphics array (VGA) interface intellectual property (IP) core to hold and display large-sized images of up to 2 MB. Quartus II 10.2 was used to map the design to the Altera Cyclone IV EP4CE115F29C7 FPGA, and the results are reported in Table 6. The core is designed so that specific sync signals are activated by corresponding control signals driven by external pins on the board. Five different combinations are used to generate the sync signals, as shown in Table 7. The system is first initialized using the Rst button. Then, with the M_cntrl switch low, the input images are loaded from the flash memory; an LED is assigned to indicate the completion of the sequential transfer of the image to the inbuilt RAM. Once the inbuilt memory is loaded, the five-level DWT is activated by enabling the D_active switch, and a red light-emitting diode (LED) indicates the completion of the process. Once this process has finished, the R/W sync is activated automatically and enables the SPIHT. Similarly, the inverse process is performed using the ID_active switch, which invokes the inverse SPIHT and IDWT cores; a green LED indicates the completion of the operation. Finally, the V_contl key is activated to enable the VGA, and the reconstructed image is shown on the VGA display unit.

Table 6 Altera level analysis
Table 7 Sync signal specification

For comparison with the architectures reported in [4] and [11, 12], the proposed architecture was also implemented on the Stratix EP1S25B672C7 FPGA. The experimental results summarized in Table 8 show that the numbers of combinational functions, logic registers, and memory resources in the proposed architecture are reduced by 24%, 90%, and 12.78%, respectively, when compared with the Tian architecture [12]. Furthermore, Table 6 shows that the design is readily portable across device families, as it does not use the internal IP cores of the FPGAs and is designed for optimization. The speed of the system increases with each successive device family as the optimization available in each new device improves. This shows that the system can be adopted in any environment and is well suited for portable imaging devices such as mobile phones and digital cameras.

Table 8 Comparison of B9/7 DWT implementation on FPGA

5 Conclusions

This paper has proposed an enhanced image processing system utilizing a DWT structure with log-based floating point computation units and SPIHT coders. Efficient decomposition levels of the DWT and SPIHT algorithms have to be chosen for the hardware implementation; from the detailed analysis performed with various test images, it is found that five-level decomposition in the DWT together with the block-based parallel-pipelined SPIHT gives a good PSNR value irrespective of the compression ratio. This paper adopted a modified arithmetic unit in the DWT structure to achieve good accuracy with minimum latency and power. The modification targets the computation units in the DWT structure, which are otherwise merely integer-styled operation units. As floating point operations are much more complex than integer-based operations, the complexity of the computation hardware also increases, which degrades the efficiency of the DWT operations. Hence, this paper introduced a log-based computation structure to minimize the strength of the operations. Furthermore, the results show that the accuracy of the DWT increases because the rounding errors are smaller with log transformations. The overall structure achieves a 25% improvement in accuracy with the proposed log-based FPUs. In addition, the utilization of the LNS in the model provides a 47% power reduction in the structure, as the overall signal activity and strength are reduced. Hence, the proposed structure features high speed, good accuracy, and low power utilization, and its adoption in the proposed image processing system results in good hardware optimization. Moreover, the model was implemented on different FPGAs to test its robustness and versatility, showing that it is well suited for portable image analyzing gadgets.

References

  1. Sweldens W: The lifting scheme: a custom-design construction of biorthogonal wavelets. Appl. Comput. Harmon. Anal. 1996, 3(2):186-200. doi:10.1006/acha.1996.0015
  2. Daubechies I, Sweldens W: Factoring wavelet transforms into lifting steps. J. Fourier Anal. Appl. 1998, 4(3):247-269. doi:10.1007/BF02476026
  3. Acharya T, Chakrabarti C: A survey on lifting-based discrete wavelet transform architectures. J. VLSI Signal Process. 2006, 42:321-339. doi:10.1007/s11266-006-4191-3
  4. Barua S, Carletta JE, Kotteri KA, Bell AE: An efficient architecture for lifting-based two-dimensional discrete wavelet transforms. Integr. VLSI J. 2005, 38(3):341-352. doi:10.1016/j.vlsi.2004.07.010
  5. Andra K, Chakrabarti C, Acharya T: A VLSI architecture for lifting-based forward and inverse wavelet transform. IEEE Trans. Signal Process. 2002, 50(4):966-977. doi:10.1109/78.992147
  6. Shi G, Liu W, Zhang L, Li F: An efficient folded architecture for lifting-based discrete wavelet transform. IEEE Trans. Circuits Syst. II 2009, 56(4):290-294.
  7. Huang CT, Tseng PC, Chen LG: Flipping structure: an efficient VLSI architecture of lifting-based discrete wavelet transform. IEEE Trans. Signal Process. 2004, 52(4):1080-1088. doi:10.1109/TSP.2004.823509
  8. Kim J, Park T: High performance VLSI architecture of 2D discrete wavelet transform with scalable lattice structure. World Acad. Sci. Eng. Technol. 2009, 54:591-596.
  9. Jiang W, Ortega A: Lifting factorization-based discrete wavelet transform architecture design. IEEE Trans. Circuits Syst. Video Technol. 2001, 11(5):651-657. doi:10.1109/76.920194
  10. Zhang W, Jiang Z, Gao Z, Liu Y: An efficient VLSI architecture for lifting-based discrete wavelet transform. IEEE Trans. Circuits Syst. II 2012, 59(3):158-162.
  11. Cheng C, Parhi KK: High-speed VLSI implementation of 2-D discrete wavelet transform. IEEE Trans. Signal Process. 2008, 56(1):393-403.
  12. Tian X, Wu L, Tan YH, Tian JW: Efficient multi-input/multi-output VLSI architecture for two-dimensional lifting-based discrete wavelet transform. IEEE Trans. Comput. 2011, 60(8):1207-1211.
  13. Wu BF, Hu YQ: An efficient VLSI implementation of the discrete wavelet transforms using embedded instruction codes for symmetric filters. IEEE Trans. Circuits Syst. Video Technol. 2003, 13(9):936-943. doi:10.1109/TCSVT.2003.816509
  14. Zhang C, Wang C, Ahmad MO: A pipeline VLSI architecture for fast computation of the 2-D discrete wavelet transform. IEEE Trans. Circuits Syst. I 2012, 59(8):1775-1785.
  15. Lan X, Zheng N, Liu Y: Low-power and high-speed VLSI architecture for lifting-based forward and inverse wavelet transform. IEEE Trans. Consum. Electron. 2005, 51(2):379-386. doi:10.1109/TCE.2005.1467975
  16. Lee DU, Kim LW, Villasenor JD: Precision-aware self-quantizing hardware architecture for the discrete wavelet transform. IEEE Trans. Image Process. 2012, 21(2):768-777.
  17. Beauchamp MJ, Hauck S, Underwood KD, Hemmert KS: Architectural modification to enhance the floating-point performance of FPGAs. IEEE Trans. Very Large Scale Integration (VLSI) Syst. 2008, 16(2):177-187.
  18. Ho CH, Yu CW, Leong PHW, Luk W, Wilton SJE: Floating-point FPGA: architecture and modeling. IEEE Trans. Very Large Scale Integration (VLSI) Syst. 2009, 17(12):1709-1718.
  19. Even G, Mueller SM, Seidel P-M: A dual precision IEEE floating-point multiplier. Integr. VLSI J. 2000, 29(2):167-180. doi:10.1016/S0167-9260(00)00006-7
  20. Yu CW, Smith AM, Luk W, Leong PHW, Wilton SJE: Optimizing floating point units in hybrid FPGAs. IEEE Trans. Very Large Scale Integration (VLSI) Syst. 2012, 20(7):45-65.
  21. Chong YJ, Parameswaran S: Configurable multimode embedded floating-point units for FPGAs. IEEE Trans. Very Large Scale Integration (VLSI) Syst. 2011, 19(11):2033-2044.
  22. Anand TH, Vaithiyanathan D, Seshasayanan R: Optimized architecture for floating point computation unit. In International Conference on Emerging Trends in VLSI, Embedded Systems, Nano Electronics and Telecommunication Systems. Thiruvannamalai, India; 2013:1-5.
  23. Paul S, Jayakumar N, Khatri SP: A fast hardware approach for approximate, efficient logarithm and antilogarithm computations. IEEE Trans. Very Large Scale Integration (VLSI) Syst. 2009, 17(2):269-277.
  24. Paliouras V, Karagianni K, Stouraitis T: Error bounds for floating-point polynomial interpolators. IEE Electron. Lett. 1999, 35(3):195-197. doi:10.1049/el:19990143
  25. IEEE standard for floating-point arithmetic, IEEE Std 754-2008. IEEE, New York, NY, USA; 2008:1-70. doi:10.1109/IEEESTD.2008.4610935
  26. Vaithiyanathan D, Seshasayanan R: High speed low power DWT structure with log based FPU in FPGAs. In International Conference on Green Computing, Communication and Conservation of Energy (ICGCE 2013). Chennai, India; 2013:308-313. doi:10.1109/ICGCE.2013.6823451
  27. Said A, Pearlman WA: A new fast and efficient image codec based on set partitioning in hierarchical trees. IEEE Trans. Circuits Syst. Video Technol. 1996, 6(3):243-250. doi:10.1109/76.499834
  28. Wheeler FW, Pearlman WA: SPIHT image compression without lists. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 4; 2000:2047-2050.
  29. Corsonello P, Perri S, Staino G, Lanuzza M, Cocorullo G: Low bit rate image compression core for onboard space applications. IEEE Trans. Circuits Syst. Video Technol. 2006, 16(1):114-128.
  30. Jyotheswar J, Mahapatra S: Efficient FPGA implementation of DWT and modified SPIHT for lossless image compression. J. Syst. Archit. 2007, 53:369-378. doi:10.1016/j.sysarc.2006.11.009
  31. Cheng CC, Tseng PC, Chen LG: Multimode embedded compression codec engine for power-aware video coding system. IEEE Trans. Circuits Syst. Video Technol. 2009, 19(2):141-150.
  32. Fry T, Hauck S: SPIHT image compression on FPGAs. IEEE Trans. Circuits Syst. Video Technol. 2005, 15(9):1138-1147.
  33. Jin Y, Lee HJ: A block-based pass-parallel SPIHT algorithm. IEEE Trans. Circuits Syst. Video Technol. 2012, 22(7):1064-1075.
  34. Zervas ND, Anagnostopoulos GP, Spiliotopoulos V, Andreopoulos Y, Goutis CE: Evaluation of design alternatives for the 2D-discrete wavelet transform. IEEE Trans. Circuits Syst. Video Technol. 2001, 11:1246-1262. doi:10.1109/76.974679
  35. Zhang C, Long Y, Kurdahi F: A hierarchical pipelining architecture and FPGA implementation for lifting-based 2-D DWT. J. Real-Time Image Proc. 2007, 2:281-291. doi:10.1007/s11554-007-0057-6
  36. The USC-SIPI image database. University of Southern California, Signal and Image Processing Institute; 2011. Available: http://sipi.usc.edu/database/
  37. Virtex-6 FPGA data sheet. Xilinx, Inc., San Jose, CA, USA; 2012. http://www.xilinx.com/support/documentation/data_sheets/ds150.pdf. Accessed 18 Feb 2013
  38. Corsonello P, Perri S, Zicari P, Cocorullo G: Microprocessor-based FPGA implementation of SPIHT image compression system. Microprocessors and Microsystems 2005, 29(6):299-305. doi:10.1016/j.micpro.2004.08.013
  39. Chew LW, Chia WC, Ang L-M, Seng KP: Very low-memory wavelet compression architecture using strip-based processing for implementation in wireless sensor networks. EURASIP J. Embed. Syst. 2009.
  40. Liu K, Belyaev E, Guo J: VLSI architecture of arithmetic coder used in SPIHT. IEEE Trans. Very Large Scale Integration (VLSI) Syst. 2012, 20(4):697-710.
  41. DE2-115 FPGA board data sheet. Altera Corporation, San Jose, CA, USA; 2010. ftp://ftp.altera.com/up/pub/Altera_Material/12.1/Boards/DE2-115/DE2_115_User_Manual.pdf. Accessed 15 March 2013


Author information

Corresponding author

Correspondence to Vaithiyanathan Dhandapani.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Dhandapani, V., Ramachandran, S. Power-optimized log-based image processing system. J Image Video Proc 2014, 37 (2014). https://doi.org/10.1186/1687-5281-2014-37
