 Research
 Open Access
Adaptive error protection coding for wireless transmission of motion JPEG 2000 video
EURASIP Journal on Image and Video Processing volume 2016, Article number: 10 (2016)
Abstract
The delivery of video over wireless, error-prone transmission channels requires careful allocation of channel and source code rates, given the available bandwidth. In this paper, we present a theoretical framework to find an optimal joint channel and source code rate allocation, by considering an intra-coded video compression standard such as Motion JPEG 2000 and an error-prone wireless transmission channel. Lagrangian optimization is used to find the optimal code rate allocation, from a PSNR perspective, starting from commonly available source coding outputs, such as intermediate rate-distortion traces. The algorithm is simple, adaptive to both the available bandwidth and the transmission channel conditions, and has a low computational complexity. Simulation results, using Reed-Solomon (RS) coding, show that the achieved performance, in terms of PSNR and MSSIM, is comparable with that of other methods reported in the literature. In addition, a simplified and suboptimal expression for determining the channel code assignment is also provided.
Introduction
Many multimedia devices are being turned into complete entertainment centers, also by taking advantage of wireless transmission. There exist several industry-backed alliances aimed at wirelessly transmitting audio/video content between multimedia home appliances using either the 60-GHz band [1–3] or the 2.4–5.0-GHz unlicensed spectrum [4, 5], relying on techniques such as ultra-wideband, orthogonal frequency division multiplexing, and multi-antenna links.
Moreover, JPEG 2000 [6] is rapidly spreading as a valuable intra-coding scheme for video contribution applications [7] due to its high compression efficiency, wide coverage of encoding profiles from lossless to lossy, and low latency. Recently, the International Organization for Standardization (ISO), jointly with the International Electrotechnical Commission (IEC) and the International Telecommunication Union (ITU), added new profiles to the JPEG 2000 standard for broadcast video contribution and distribution with an amendment to the JPEG 2000 core coding system [8]. This amendment defines three new profiles, aimed at studio contribution links, specifying encoding parameters and rate limits over seven operating levels for video encoded with JPEG 2000. JPEG 2000 over MPEG-2 Transport Stream is also a recently standardized method suited to this scenario [9]. In this kind of application, wireless cameras may produce a video contribution that has to adapt, in real time, to time-varying transmission channel profiles. In such a case, both the available bandwidth and the wireless link bit error rate (BER) may be considered slowly variable with respect to the video frame rate [10].
The streaming of video, either directly over the physical layer or using IP packets, is subject to transmission errors. Retransmission of lost/corrupted data or packets is viable, but it conflicts with interactivity and real-time requirements. Thus, forward error correction (FEC) is generally adopted, and the channel code rate may be matched to the error sensitivity of the compressed data, performing unequal error protection (UEP) [11–13].
Many researchers have investigated optimal methods for the protection of intra-coded video streams. For instance, JPEG 2000 for wireless (JPWL) [14] has been standardized for this purpose, and several works have shown its good performance either when used on IP networks [15–17] or directly over the physical layer [18–23].
In [24, 25], the authors addressed similar problems showing how rate-distortion optimized (RaDiO) audio/video streaming over packetized networks can be achieved, and they solved this problem using Lagrangian optimization. Cataldi et al. proposed a technique based on raptor codes and sliding windows, where different H.264 [26] code rates are associated with each quality layer [27]. In [28], the authors proposed Wyner-Ziv coding for the protection of a coarse version of the video, where side information is provided by a primary H.26x decoder. Ahmad et al. proposed a UEP system using fountain codes [29]. In general, many of the solutions reported in the literature for searching optimal UEP strategies are based on heuristic methods or use optimization algorithms [30–33]. It should be noted, however, that such solutions are based on search strategies characterized by a variable amount of computational complexity, which could prevent their use in real-time and bandwidth-adaptive video transmission.
Review of recent works
In recent years, several researchers have investigated techniques capable of applying differentiated FEC levels to wavelet-based image/video compressors, when the multimedia stream is delivered over unreliable or wireless channels.
In [34], a motion-compensated temporal filtering discrete wavelet transform (DWT) video coder is coupled with double binary turbo codes: the joint source-channel coding strategy is based on distortion profiling and code statistics. Also, Ho and Tsai [35] used 3-D wavelets, data interleaving, and Reed-Solomon (RS) codes for their UEP streaming system. In [36], the authors investigate the performance of MJPEG 2000 and rate-compatible punctured convolutional codes for streaming over a time-varying binary symmetric channel (BSC); in their work, the rate-distortion trade-off of the coding units adapts the error correction code to the bandwidth and error characteristics. Schwartz et al. [37] adopted the DWT-based compression and convolutional coding FEC of the CCSDS 122.0-B-1 and 131.0-B-1 satellite standards. Their results show that wavelet coefficient UEP outperforms the equal error protection (EEP) method over the simulated AWGN channel.
In [15, 38], the authors focus on JPEG 2000, RS coding, and interleaving over wireless channels, simulated by a time-varying BSC with the Gilbert-Elliott (GE) model. UEP is performed by variable FEC rates defined by solving a convex optimization problem. Based on interleaving effects, they derive a lower bound for the successful image decoding rate in wireless environments. In [39], several UEP schemes are compared and layered JPWL streaming with RTP packetization on wireless channels is studied; their FEC allocation method is faster and less complex than others, while yielding comparable quality.
JPWL has been shown to be both flexible and reactive to variable channel status. In [21, 40], streaming performance is simulated over realistic wireless channels, such as multiple-input multiple-output or Rayleigh fading ones. In [17], the authors combine JPWL with a dynamic bandwidth estimation scheme in order to provide the best layering, scaling, and protection of video streams. Even if source distortion is coarsely estimated, it has been shown that it can be effectively used to find an optimal rate allocation that outperforms EEP [23].
In our previous papers [16, 41], we used JPWL and interleaving over lossy packet networks. The UEP solution, found by means of a recursive, dichotomic search algorithm, was shown to always outperform EEP, and a low-complexity interleaving strategy was devised for a JPWL implementation on a DSP device. Iqbal et al. [42, 43] devised a family of dynamic programming code allocation methods for FEC-protected wireless video streaming. Their protection assignment can provide a variable trade-off between performance and implementation complexity.
A different view was adopted by Bahmani et al. [44]. The method devised by the authors operates mainly at the decoder side, leaving the particular UEP implementation open. By leveraging the error resilience features of JPEG 2000, their method guesses the erased received symbols and improves the error correction capability.
Ouaret et al. [45] compared RS-coded JPEG 2000 to the Slepian-Wolf/Wyner-Ziv distributed video coding (DVC) approach. Their results show that JPEG 2000 results in better quality at high error rates, even if only an EEP scheme was used, whereas DVC performs better at lower packet loss rates. In [46], JPEG 2000 and H.264 streams are protected with UEP and transmitted over lossy packet networks. A performance comparison with multiple description coding shows that UEP achieves better quality. Chen et al. [47], by using progressive digital fountain codes, allowed different users to receive broadcast video at different qualities, depending on the reliability of their UDP-based Wi-Fi link. In [48], the authors describe the emerging MPEG multimedia transport standard for delivering high bit rate video over packet-lossy networks, using a low-density generator-matrix FEC. They show the effectiveness of their method on the streaming of JPEG 2000 digital cinema.
Proposed contributions
In this paper, we present a simple mathematical and algorithmic solution for finding an optimal UEP channel code rate allocation strategy. The method used for solving the problem is based on Lagrangian optimization, which is known in the literature and has been applied in several works of other authors [25, 34, 36]. Our proposed solution can be calculated with a closed-form expression, directly from the knowledge of the data unit rate-distortion and of the transmission channel characteristics.
Differently from other existing strategies, such as those reviewed in Sec. 1.1, our method has a closed-form representation of the solution to the UEP problem, and it does not require iterative or dynamic-programming-based strategies. Moreover, the presence of data interleaving enables optimal video quality when channel conditions are time-varying, and both data- and channel-adaptivity can be achieved.
The devised method does not rely on a particular channel coding technique, since it can be applied universally to all block-based FEC schemes. The algorithm itself is also lightweight, as it gives a closed-form expression of the optimal channel code allocation strategy, which can be computed in real time and can adaptively respond to changes in the available transmission bandwidth and experienced channel BER. This makes the technique suitable for channels with unknown and slowly changing error rate and even available bandwidth; the video stream receiver should communicate the experienced error rate to the sender side, which in turn would change the UEP solution accordingly. We also show how the algorithm can be practically applied, using JPEG 2000 source coding and RS channel coding, and present some performance results expressed in terms of either PSNR (peak signal-to-noise ratio) or MSSIM (mean structural similarity index metric).
Some mathematical derivations and performance results shown in this paper have already been partially presented in [49]. However, in this paper, we present additional derivations and novel simulation results. First, we describe in detail a method for assigning codewords of a channel code and implementing the designed protection profile. Furthermore, we work out a formula that allows approximating the optimal protection profile without knowledge of the image content, but only by means of a statistical entropy approach. Finally, we also present some results on the simulated transmission of a complete video sequence.
The paper is structured as follows. First, the theoretical framework for optimized unequal error protection is presented in Sec. 2, and a Lagrangian optimization strategy is shown to be able to find a UEP solution, both for particular and general cases. Then, a practical UEP code rate assignment using RS codes is presented in Sec. 3. In Sec. 4, the Monte Carlo simulation setup is described, and the results of several simulated scenarios are presented and discussed. Finally, conclusions are drawn, followed by an Appendix that contains proofs of assumptions and lemmas.
Optimized unequal error protection
In this paper, we model the distortion as a function of the combined channel characteristics/channel code performance in terms of probability of data loss. This approach is commonly used in the reference literature [16, 30]. We use Motion JPEG 2000 as source coding algorithm and RS as channel coding algorithm, but the same optimization technique can be applied to other source/channel coding combinations as well. Table 1 summarizes the mathematical notation that will be used throughout the following sections and in the appendices.
Since we are considering an intra-video coding method, the video stream is regarded as a sequence of frames that are compressed independently of each other [50]. Each frame is compressed using JPEG 2000, and the size of the compressed bitstream is B bits or, equivalently, the source code rate is r _{ s } bits/pixel. The compressed bitstream is then divided into N _{ k } pieces, each one of (k−16) bits; a two-byte cyclic redundancy check (CRC) codeword, used to test for the error-free condition, is appended at the end of each piece (see Fig. 1). The CRC codeword is considered in our method even if it is not necessary when RS codes are used, since they are able to detect the presence of residual errors. However, for different types of channel codes (e.g., convolutional codes), the CRC codeword is required. In this work, the detection of possible erasures is not managed differently from the detection of errors. The message words of the k-bit long pieces are then RS encoded, and the codewords are assembled in n _{ i }-bit long pieces, 0≤i<N _{ k }. If the average channel code rate is \(\bar {r}\), the combined source/channel code rate for the whole codestream will be \(R = r_{s} / \bar {r}\).
The packetized codestream is then transmitted over a Q-ary symmetric transmission channel, where Q is the number of available different transmission symbols, that is, the pieces are assembled in symbols of \(\log_2 Q\) bits. We define p _{ i } as the probability that no errors occur up to piece i, and M _{ i } as the mean-square error (MSE) distortion of the decompressed image using all pieces up to piece i. The combined transmission channel performance and channel coding correction capability results in the probability h(n _{ l }) that a piece of n _{ l } bits (after channel decoding) is received with errors (residual error rate); many other researchers, in the past, have already used this model (e.g., [30]). In this paper, we assume that this probability is approximated by a log-linear model [51], such as

$$ h(n_{l}) = C\, e^{-s \left(n_{l} + d\right)}, $$
(1)
where C, s, and d represent the combined transmission channel/channel code performance characteristics. In particular, the parameter s is a decay constant connected to the correcting performance of the code, d is an offset to n _{ l } used to improve the validity of this model, and C is an amplitude normalization constant. Further details on this approximation are given in Appendix A.1 Log-linear approximation of residual BER curves. This approximation is valid when the channel error probability is lower than 10^{−1}; for higher values, the log-linear relationship does not hold. However, in such cases, any practical rate allocation/FEC method is hardly operational without increasing the channel coding redundancy beyond a limit at which the video quality is severely impaired by the high decoding latency and the low source code rate. In particular, for the model presented in (1), it is found that a unique parameter, s, can be used to represent the transmission scenario, and its value increases as the combined transmission channel/channel code performance improves (see Appendix A.1 Log-linear approximation of residual BER curves). At the receiving side, the probability of having no errors up to piece i depends on the chosen sequence of codeword lengths (up to piece i) as p _{ i }=p _{ i }(n _{0},n _{1},…,n _{ i }). The optimization objective is that of minimizing the average MSE distortion, given that a constant amount of combined source and channel coding bits is sent on the channel, i.e.,

$$ \min_{\{n_{i}\}} \mathrm{E}\left[\text{MSE}\right] \quad \text{subject to} \quad \sum_{i=0}^{N_{k}-1} n_{i} = N_{k}\, \bar{n}. $$
(2)
Lemma 1.
The optimal UEP profile, i.e., the set of codeword lengths {n _{ i }} to be used for the pieces, which minimizes the overall distortion on the received image, is given by

$$ n_{i} = \bar{n} + \frac{1}{s} \ln \left( \frac{m_{i}}{\hat{m}} \right), \qquad 0 \le i < N_{k}, $$
(3)
where \(\bar {n}=k/\bar {r}\) is the average protected piece length, m _{ i } is a complementary cumulative distortion (CCD), \(m_{i} = \sum _{j = i}^{N_{k} - 1}{M_{j}}\), and \(\hat {m}\) is the CCD geometric mean, \(\hat {m} = {\left (\prod _{i = 0}^{N_{k} - 1}{m_{i}}\right)^{1/{N_{k}}}}\).
Appendix A.2 Proof of Lemma 1 shows how the closed-form expression (3), the mathematical solution of (2), can be obtained. The relationship in (3) can be commented on as follows:

The protection rate for piece i depends on the average protection rate \(\bar {n}\) plus a modification term.

The modification term logarithmically weights the CCD at piece i, normalized by the geometric mean of the CCD.

If the channel code has high error-correction performance and/or the channel conditions are good, the parameter s is large, which gives a small modification term.

The protection profile depends on the equivalent transmission channel conditions only by means of the parameter s, not C and d.

Since m _{ i } is monotonically non-increasing and ln(·) is a monotonic function, the modification term is monotonically non-increasing, i.e., pieces at the beginning of the bitstream are more protected than those at the end.

The protection level at piece i depends on the cumulative amount of distortion of all following pieces.

The shape of the protection profile is determined by the CCD. Its ordinate extrema are determined only by the combined channel/code performance.
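These properties follow directly from the closed-form expression of Lemma 1. As an illustration, a minimal Python sketch (function and variable names are ours, not from the paper) computes the profile {n _{ i }} from a per-piece distortion trace M _{ i }, the average protected piece length \(\bar{n}\), and the channel parameter s:

```python
import numpy as np

def uep_profile(M, n_bar, s):
    """Closed-form UEP profile of Lemma 1: n_i = n_bar + (1/s) ln(m_i / m_hat),
    where m_i is the complementary cumulative distortion (CCD) and m_hat its
    geometric mean.  M is the per-piece MSE trace M_i, n_bar the average
    protected piece length (bits), s the log-linear decay parameter."""
    M = np.asarray(M, dtype=float)
    m = np.cumsum(M[::-1])[::-1]          # CCD: m_i = sum_{j >= i} M_j
    m_hat = np.exp(np.mean(np.log(m)))    # geometric mean of the CCD
    return n_bar + np.log(m / m_hat) / s  # budget-preserving by construction
```

By construction the modification terms sum to zero, so the total bit budget N _{ k }·\(\bar{n}\) is preserved, and the profile is non-increasing whenever the distortion trace is.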
With respect to other similar solutions presented in the literature, and described in Sec. 1.1, the main advantage of our method is that a closed-form solution to the UEP problem is readily available, without needing iterative or dynamic-programming-based solving strategies. The proposed solution is data and channel adaptive; regular, low bit rate feedback from the receiver lets the transmitter modify the UEP strategy, which, considering also the presence of data interleaving, enables optimal decompressed video quality when channel conditions change with time. Moreover, the side information produced at no cost during the compression process allows implementing a well-crafted protection profile, which minimizes the expected amount of distortion due to missing or corrupted data at the receiver. Finally, we also point out that when stringent real-time requirements are imposed, and the wireless channel is time-varying, deep interleaving matrices are necessary to spread the symbol losses occurring in the channel far apart (for high Doppler spread f _{ D }), and the decoding delay increases correspondingly.
Upon looking carefully at the solution (3) proposed in Lemma 1, it can be noticed that an additional condition to be satisfied is that n _{ i }≥k, 0≤i<N _{ k }, meaning that we cannot overprotect the pieces at the beginning, since there would not be enough bits to allocate for the last pieces, not even the source coding bits; especially at high symbol error probabilities, the protection profile, given the total bit budget, could be extremely unbalanced and might provide values lower than k.
Lemma 2.
The minimum average channel code rate \(\bar {r}_{\text {min}}\), expressed in terms of the minimum average piece size after channel coding, is approximated by

$$ \bar{n}_{\text{min}} \approx k + \frac{1}{s} \ln \left( \frac{\hat{m}}{m_{N_{k}-1}} \right), $$
(4)
for a given equivalent channel error performance s.
Lemma 2 can be proved after some work on (3) and supposing that \(\ln \left (m_{i} / \hat {m} \right) < 0\), for large i close to N _{ k }.
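Under the same supposition (the least protected piece is the last one, where \(\ln(m_i/\hat{m})\) is most negative), Lemma 2 can be read numerically: the smallest admissible \(\bar{n}\) is the one that pins the last piece at exactly k bits in (3). A hedged sketch, reusing the CCD quantities of Lemma 1 (helper names are ours):

```python
import numpy as np

def min_avg_piece_len(M, k, s):
    """Smallest average protected piece length keeping n_i >= k for all i,
    assuming the least protected piece under Lemma 1 is the last one."""
    m = np.cumsum(np.asarray(M, dtype=float)[::-1])[::-1]  # CCD m_i
    m_hat = np.exp(np.mean(np.log(m)))                     # geometric mean
    # force n_{N_k-1} = n_bar + (1/s) ln(m_{N_k-1} / m_hat) = k
    return k + np.log(m_hat / m[-1]) / s
```

Since \(\hat{m} > m_{N_k-1}\) for any non-degenerate distortion trace, the result always exceeds k, i.e., some redundancy beyond the bare message size is always required.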
In all cases where the exact rate-distortion curve of the compressed image is not known or cannot be calculated exactly, (3) can be approximated, provided the source coding process is expected to generate an ideal progressive codestream with a typical rate-distortion curve.
Lemma 3.
The approximated protection profile for progressively source encoded codestreams is given by
for N _{ k } large, where Δρ is the bit rate sampling step of the MSE profile.
Proof of this lemma is given in Appendix A.3 Proof of Lemma 3.
Our proposed UEP method has been devised to be as general as possible, with application scenarios that can extend also to other source and channel coding methods. For instance, considerations on the distortion reduction carried by data packets may similarly be made for the network abstraction layer (NAL) units used in H.264 or H.265. In this case, NAL units naturally segment the video stream into pieces that correspond to the pieces used in Fig. 1. The computation of the distortion profile can be achieved in several ways, whether or not temporal and spatial intra-/inter-frame dependency is maintained; layering methods for post-compression reordering of NAL units, similar to those adopted by the scalable video coding (SVC) extension of H.264, are then possible [52].
UEP profile generation using RS coding
In the scenario adopted in this paper, when RS coding is used, a strategy must be devised for the practical implementation of the optimal UEP profile found with (3). First, we consider a list of \(\tilde {N}\) mother code rates \(\{\tilde {r}_{l}\} = \{(\tilde {k}_{l} / \tilde {n}_{l}) \}\), \(l = 0, 1, \ldots, \tilde {N} - 1\), ordered by decreasing code rate and not containing repeated code values, i.e., \( \tilde {r}_{l} > \tilde {r}_{l + 1}\), \(l = 0, 1, \ldots, \tilde {N} - 2\).
Given the actual code rate for a generic piece i, r _{ i }=k _{ i }/n _{ i } (it was previously assumed that pieces are of fixed length, i.e., k _{ i }=k, so this is a generalization), an index w _{ i } can be determined such that \( \tilde {r}_{w_{i} + 1} \le r_{i} \le \tilde {r}_{w_{i}} \). The number α _{ i } of RS codewords with rate \(\tilde {r}_{w_{i}}\) and the number β _{ i } of codewords with rate \(\tilde {r}_{w_{i} + 1}\) are chosen in such a way as to achieve the target code rate r _{ i } for piece i. Due to the constraints

$$ \alpha_{i}\, \tilde{k}_{w_{i}} + \beta_{i}\, \tilde{k}_{w_{i}+1} = k_{i}, \qquad \alpha_{i}\, \tilde{n}_{w_{i}} + \beta_{i}\, \tilde{n}_{w_{i}+1} = n_{i}, $$
the number of codewords for each code is

$$ \alpha_{i} = \frac{k_{i}\, \tilde{n}_{w_{i}+1} - n_{i}\, \tilde{k}_{w_{i}+1}}{\tilde{k}_{w_{i}}\, \tilde{n}_{w_{i}+1} - \tilde{k}_{w_{i}+1}\, \tilde{n}_{w_{i}}}, \qquad \beta_{i} = \frac{n_{i}\, \tilde{k}_{w_{i}} - k_{i}\, \tilde{n}_{w_{i}}}{\tilde{k}_{w_{i}}\, \tilde{n}_{w_{i}+1} - \tilde{k}_{w_{i}+1}\, \tilde{n}_{w_{i}}}. $$
(6)
Positive-valued solutions always exist for (6), since the constraint \({\tilde {r}_{w_{i}+1}} \le {r_{i}} \le {\tilde {r}_{w_{i}}}\) gives

$$ k_{i}\, \tilde{n}_{w_{i}+1} - n_{i}\, \tilde{k}_{w_{i}+1} \ge 0, \qquad n_{i}\, \tilde{k}_{w_{i}} - k_{i}\, \tilde{n}_{w_{i}} \ge 0, $$
and the constraint \({\tilde {r}_{w_{i}}} > {\tilde {r}_{w_{i}+1}}\) gives

$$ \tilde{k}_{w_{i}}\, \tilde{n}_{w_{i}+1} - \tilde{k}_{w_{i}+1}\, \tilde{n}_{w_{i}} > 0. $$
The values of α _{ i } and β _{ i } obtained from (6) are fractional numbers. To best approximate them with integers, we first compute \(\alpha '_{i}\) by rounding α _{ i }, and then use this result to compute \(\beta '_{i}\), as

$$ \alpha'_{i} = \left\lfloor \alpha_{i} \right\rceil, \qquad \beta'_{i} = \left\lfloor \frac{n_{i} - \alpha'_{i}\, \tilde{n}_{w_{i}}}{\tilde{n}_{w_{i}+1}} \right\rceil, $$
(7)

where \(\left\lfloor \cdot \right\rceil\) denotes rounding to the nearest integer.
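The allocation just described can be sketched as follows for the fixed-message-size mother-code family used later in the simulations ({32/36, 32/38, …, 32/80}, so every codeword carries \(\tilde{k}=32\) message bytes); the helper names are ours, and the rounding step mirrors the round-α-first rule:

```python
import numpy as np

K_TILDE = 32                    # mother-code message size (bytes)
N_TILDE = np.arange(36, 81, 2)  # codeword sizes for rates 32/36 ... 32/80

def codeword_mix(k_i, n_i):
    """Numbers of codewords of the two bracketing mother codes whose mix
    realizes the target piece rate k_i / n_i (the rate must lie within
    the range covered by the mother-code family)."""
    rates = K_TILDE / N_TILDE                    # strictly decreasing
    r_i = k_i / n_i
    w = max(j for j in range(len(rates) - 1) if rates[j] >= r_i)
    nw, nw1 = int(N_TILDE[w]), int(N_TILDE[w + 1])
    T = k_i / K_TILDE                            # codewords in the piece
    alpha = (T * nw1 - n_i) / (nw1 - nw)         # higher-rate codewords
    a = round(alpha)                             # round alpha first ...
    b = round((n_i - a * nw) / nw1)              # ... then derive beta
    return a, b, nw, nw1

# example: k_i = 1600 message bytes, target protected length n_i = 2210 bytes
a, b, nw, nw1 = codeword_mix(1600, 2210)
```

For this example the mix is 45 codewords of RS(44, 32) and 5 of RS(46, 32), reproducing the 2 210 protected bytes exactly.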
System simulation and performance
Simulation setup
To assess the performance of the technique presented in this paper, we have prepared a 512-frame video composed of the first 32 frames of each of the following 16 clips with CIF resolution (352 × 288, 30 frames/s) and YUV 4:2:0 format, combined in sequence: akiyo, bus, coastguard, crew, flower, football, foreman, harbor, husky, ice, news, soccer, stefan, tempete, tennis, and waterfall [53]. Only the luminance (Y) component of the video frames has been used to perform the optimization strategy and the transmission and reception over a simulated channel.
First, the partial distortions M _{ i } for each frame in the video sequence have been calculated. To this purpose, JPEG 2000 compression has been performed using Kakadu 6.0 [54], with default parameters, no visual weighting, and the 'rate' option on every frame. The portion of each JPEG 2000 codestream located after the start-of-data (SOD) marker has been split into multiple pieces, each one with a size of (k−2) bytes (after CRC insertion, the piece will be of k bytes). Then, a new codestream has been constructed using the original header data, with an amended start-of-tile (SOT) marker to account for the new codestream length, a number i of codestream pieces, and the end-of-codestream (EOC) marker. The obtained codestream has been decompressed using Kakadu 6.0, and M _{ i } has been calculated as the distortion of the reconstructed frame. Although this process of determining M _{ i } is cumbersome, it should be said that the JPEG 2000 encoding process is able, per se, to provide such values; during the encoding procedure of JPEG 2000, an accurate rate-distortion estimation of the compressed frame is calculated, since the distortion values are gathered for the selection of the compressed wavelet coefficients with embedded block coding with optimized truncation (EBCOT) of the bitstream [55]. In this work, we have favored a direct calculation of the distortion values, in order to achieve more precise results.
When the piece boundaries do not coincide with the codestream interruption points decided by the JPEG 2000 compressor, we adopt a continuum hypothesis, i.e., we assume that the intermediate distortions at the piece boundaries can be calculated using linear interpolation from the nearest known, available distortions. This assumption is generally valid, since JPEG 2000 is a position-progressive encoder, and distortions are related to the way wavelet coefficients are truncated by EBCOT, in order to best satisfy the quality-rate constraints imposed on the compression process.
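This continuum hypothesis amounts to ordinary piecewise-linear interpolation of the distortion trace; a sketch with hypothetical truncation points (the numbers are illustrative, not from the paper):

```python
import numpy as np

# hypothetical (byte offset, MSE) pairs at EBCOT truncation points
trunc_bytes = np.array([0, 900, 2100, 4000])
trunc_mse = np.array([520.0, 140.0, 55.0, 21.0])

# distortion M_i at arbitrary piece boundaries (k - 2 = 1022 payload bytes)
piece_ends = np.arange(1022, 4000, 1022)
M = np.interp(piece_ends, trunc_bytes, trunc_mse)
```

Since the trace is monotone decreasing, the interpolated values inherit that monotonicity, which is what the UEP profile computation relies on.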
Moreover, in order to make a fair comparison among different channel code rates, we have kept fixed the total amount of data sent on the channel, i.e., the combined source and channel code rate R.
The transmission of the compressed stream has been simulated, using MATLAB, on three different types of channels. The first type is a Q-ary symmetric channel (Q=256), characterized by symbol error rates P _{ S } ranging from 10^{−3} to 10^{−1}. In this type of channel, errors occur at the symbol level; since the bit errors are equiprobable and uniformly distributed over the symbol bits (\(\log_2 Q = 8\) bits/symbol), there is a simple relationship between bit and symbol error rates when the number of bits per symbol is large, i.e., P _{ b }≅P _{ S }/2 ([56] Section 4.41).
The second type of channel uses binary phase shift keying (BPSK) and additive white Gaussian noise (AWGN), in order to represent a transmission condition akin to physical-level signaling on an actual, but ideal, channel. In this case, the performance depends on the signal-to-noise ratio (SNR) expressed by γ _{ b }. Finally, the last type of simulated channel uses BPSK, AWGN, and Rayleigh-distributed flat fading, which represents a condition similar to that experienced on actual, wireless, non-line-of-sight (NLOS) channels. In this case, the performance depends on the SNR γ _{ b } and on the correlation degree among fades, expressed by the Doppler spread f _{ D }. For channels using BPSK, the expressions used to determine the average BER (and the corresponding channel parameter s), given a certain value of γ _{ b }, are

$$ P_{b} = Q\left(\sqrt{2\gamma_{b}}\right), \qquad P_{b} = \frac{1}{2}\left(1 - \sqrt{\frac{\gamma_{b}}{1+\gamma_{b}}}\right), $$
(8)
where Q(·) is the Gaussian tail function, and the Rayleigh channel BER is calculated for the maximum uncorrelated Doppler spread [56].
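Both expressions are the textbook BPSK error rates, the Rayleigh one being the fully interleaved, uncorrelated-fading average. A small sketch of how they may be evaluated, with Q(·) written via the complementary error function:

```python
import math

def q_func(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_bpsk_awgn(gamma_b):
    """BPSK over AWGN: P_b = Q(sqrt(2 * gamma_b))."""
    return q_func(math.sqrt(2.0 * gamma_b))

def ber_bpsk_rayleigh(gamma_b):
    """BPSK over flat Rayleigh fading, averaged over the fading
    distribution: P_b = 0.5 * (1 - sqrt(gamma_b / (1 + gamma_b)))."""
    return 0.5 * (1.0 - math.sqrt(gamma_b / (1.0 + gamma_b)))
```

At the same average SNR, fading costs orders of magnitude in BER, which is why the channel parameter s, and hence the UEP profile, differs so much between the two channel types.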
The UEP profile has been generated using (3), given the distortion profile M _{ i } and the channel parameter s. Then, the codestream has been split into pieces that have been protected according to the determined UEP profile, using RS coding with \(\tilde {N} = 24\) mother code rates \(\left \lbrace \tilde {r}_{l} \right \rbrace = \left \lbrace 32/36, 32/38, 32/40, \ldots, 32/80 \right \rbrace \), and adopting the codeword allocation strategy given by (7). The effect of the channel is simulated by randomly changing the transmitted bytes according to the simulation symbol error rate P _{ S }. The error-affected codestream has been recomposed by terminating it at the last error-free received piece (thanks to the CRC codeword). In this way, any image reconstruction artifact due to wrong/erased codestream bytes has been eliminated, and the reconstructed image MSE is that used by the UEP allocation strategy. The JPEG 2000 header (about 300 bytes) has been considered as transmitted on a reliable channel, since it represents the most critical section of the codestream. At the receiving side, the JPEG 2000 header has been prepended to the JPEG 2000 bitstream bytes, and only the portion of the header carrying information on the bitstream size (the Psot field of the SOT marker) has been changed accordingly. Performance has been evaluated as objective visual quality, and Y-PSNR has been used as the objective quality indicator. In addition, we used MSSIM to faithfully represent the subjective evaluation by a human observer. The overall performance has been calculated by averaging the PSNR and MSSIM values calculated at each frame of the video sequence. The performance of the UEP method has been directly compared with that of an EEP method.
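The byte-corruption step of this Monte Carlo chain can be mimicked in a few lines (a minimal sketch; the actual simulator also performs RS decoding and CRC-driven truncation of the codestream):

```python
import numpy as np

rng = np.random.default_rng(0)

def qary_symmetric(data, p_s):
    """Q-ary symmetric channel with Q = 256: each byte is replaced, with
    probability p_s, by a different byte chosen uniformly at random."""
    buf = np.frombuffer(bytes(data), dtype=np.uint8).copy()
    hit = rng.random(buf.size) < p_s
    # adding a nonzero offset modulo 256 guarantees the byte changes
    buf[hit] = (buf[hit] + rng.integers(1, 256, int(hit.sum()))) % 256
    return buf.tobytes()
```

Repeating this on independently protected pieces and truncating at the first CRC failure reproduces the decoding model used by the allocation strategy.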
Additionally, comparisons with existing techniques in the literature have been made using a static reference image, the 512 × 512 pixel grayscale version of lena, compressed at a total bit rate (joint source and channel code rate) of R=0.5 bits/pixel. For each simulated channel error rate, at least 100 independent transmissions of the image have been repeated, and the results averaged. Since both the video sequence and the static image cases cover a standard-definition application scenario, we have also used a static image frame from the high-definition 1920 × 1080 pixel crowdrun RGB sequence [53], in order to show some properties of the calculated UEP profiles.
Simulated performance results
Performance for static images on BSC
We first report the performance obtained with static images. Both images (lena and crowdrun) were compressed to a total rate of R=2.5 bits/pixel, comprising both the source and the channel code bits.
Figure 2 (top) presents the UEP profile for lena, calculated with (3) at an average channel code rate of \(\bar {r}=32/44\) and a channel symbol error probability P _{ S }=5×10^{−2}. The UEP profile, represented by the solid line, is shown in terms of the protected piece length n _{ i } versus the piece index i; the average protected piece length \(\bar {n}\), coincident with the EEP profile, is represented by the dashed line. Figure 2 (bottom) shows the equivalent case for crowdrun. In both cases, the piece length is k=1 600 bytes, and the channel parameter is s=0.0127 (see Appendix A.1 Log-linear approximation of residual BER curves and Table 3). The dot-dashed line in Fig. 2 represents the protection profile obtained using (5), at rate steps corresponding to the situation illustrated so far. The UEP profiles begin with a high protection level (low code rate), which then gradually decreases as the index of the piece increases, as expected.
Figure 3 shows how the UEP profile is practically generated, for lena, using a proper combination of the \(\left \lbrace \tilde {r}_{l} \right \rbrace \) mother codes and with numbers of codewords \(\alpha '_{i}\) and \(\beta '_{i}\), for each pair of codes, as given by (7). The top subplot shows the mother code rates \(\tilde {r}_{w_{i}}\) and \(\tilde {r}_{w_{i}+1}\) used in each ith piece, expressed in terms of codeword size \(\tilde {n}_{w_{i}}\). The bottom subplot describes the normalized amount \(\delta _{\alpha,i}\) of codewords with rate \(\tilde {r}_{w_{i}}\) with respect to the total number of codewords used in the ith piece,

$$ \delta_{\alpha,i} = \frac{\alpha'_{i}}{\alpha'_{i} + \beta'_{i}}. $$
Clearly, (1−δ _{ α,i }) is the normalized amount of codewords with rate \(\tilde {r}_{w_{i} + 1}\).
Figure 4 shows the UEP profile n _{ i } obtained for several values of the channel symbol error rate P _{ S } (from 10^{−3} to 10^{−1}) and an average code rate \(\bar {r} = 32/46\), for lena. The message word size used is k=1 600 bytes, which corresponds to the floor of the plot. For low error rates, the protection profile is almost linear, meaning that an EEP solution is optimal. On the other hand, at higher error rates, the profile climbs above the average protection level at the beginning and sinks below it toward the end. At even higher error rates, the average protection is not sufficient to keep the profile above the message size floor, and an increased protection rate is required for correct operation of the algorithm.
Figure 5 shows the minimum required average protected piece size \(\bar {n}_{\text {min}}\) given by (4), plotted versus the channel symbol error rate P _{ S } and the total bit rate R, using a message word size of k=1 024 bytes and the crowdrun image. The plot has been generated by considering the channel code rate in the total bit rate as well. We point out that, using this relationship, the system can adaptively respond to variations of the transmission channel conditions and of the available bandwidth, keeping in all cases the received video quality at the optimal level. Given a channel error rate P _{ S }, the minimum average protection slightly increases as the available bandwidth increases, meaning that higher protection is required to achieve the optimal UEP profile: as the available bandwidth grows, the algorithm selects progressively higher levels of protection to maximize image quality. Clearly, in order to achieve the optimal profile for a given error rate, some knowledge of the channel status is required at the transmitter. Thus, the receiver must be able to calculate an estimate \(\tilde {P}_{S}\) of the current channel symbol error rate and feed this information back to the transmission side. If channel conditions are slowly varying with respect to the information exchange rate, then this feedback may happen with minimum signaling requirements. Since our method employs the secure delivery of the JPEG 2000 header part (using a reliable transmission technique), these data can be repeated periodically (e.g., 1–2 times per second) on the same channel and used as pilot information for estimating \(\tilde {P}_{S}\).
The performance of our method has been measured using the PSNR and MSSIM quality metrics. In order to compare the results with similar methods referenced in [16, 57, 58], Monte Carlo simulations have been run at a total bit rate (joint source and channel code rate) of R=0.5 bits/pixel, and the metrics have been averaged. Figure 6 shows the achieved performance plotted versus the actual bit error probability P_b before channel decoding at the receiver side. The simulations cover a total of nine different configurations: three piece length values k (512, 1 024, and 1 600 bytes) and three average channel code rates \(\bar {r}\) (32/40, 32/44, and 32/48). In all the presented cases, it can be seen that shorter values of k yield slightly higher PSNR/MSSIM values for the same channel error rate.
The results summarized in Table 2, for the case \(\bar {r} = 32/40\) and k=512, show that the achieved PSNR is comparable to or better than that obtained in [16, 57, 58]. The exception is the comparison with [58], which reports a higher PSNR. In that work, the authors used turbo codes with codewords longer than those used in our work, resulting in improved error correction capability. However, we point out that our algorithm is designed to find optimal UEP profiles, and using different error-correcting codes would result in different final PSNR values.
Performance for video sequence on BSC
As for the performance obtained on the video sequence, Fig. 7 shows the history of the quality metrics along the N_F=512 frames of the test video. The piece length is k=1 024 bytes, the average code rate is \(\bar {r} = 32/48\), and the total rate is R=2.5 bits/pixel. Given the video frame rate (30 frames/s) and resolution (352 × 288), this corresponds to a transmitted bit rate of nearly R_b=7.6 Mbit/s. The solid dark-green line (MAX) represents the maximum theoretically achievable PSNR at the receiver; it is due only to the compression artifacts introduced by the JPEG 2000 lossy source encoding. The performance indicators have been measured in the following way. First, the MSE ε_i of the luminance component of every decoded frame has been converted into logarithmic PSNR as Γ_i=10 log10(1/ε_i) and plotted versus the frame number i. Then, the average MSE has been calculated as the arithmetic mean of all MSEs, \(\bar {\epsilon } = \left (1/N_{F}\right)\sum _{i=0}^{N_{F} - 1}\epsilon _{i}\). Finally, the average PSNR is calculated from the average MSE as \(\bar {\Gamma }=10 \log _{10}{\left (1/\bar {\epsilon }\right)}\). For the MSSIM, the values ι_i obtained at each frame have been plotted versus the frame number i and arithmetically averaged as \(\bar {\iota } = \left (1/N_{F}\right)\sum _{i=0}^{N_{F} - 1}\iota _{i}\). Also in this case, the actual BER P_b, before channel decoding at the receiver side, is calculated and used to compare the performance.
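The averaging procedure described above (mean of the per-frame MSEs first, then conversion to dB) can be sketched as follows; note that this differs from arithmetically averaging the per-frame PSNR values, since the logarithm is applied after the mean. All names are illustrative.

```python
import numpy as np

# Average PSNR computed as in the text: average the per-frame MSEs
# first, then convert the mean to dB (signal range normalized to 1).
def per_frame_psnr(mse_frames):
    """Per-frame history Gamma_i = 10*log10(1/eps_i)."""
    return 10.0 * np.log10(1.0 / np.asarray(mse_frames))

def average_psnr(mse_frames):
    """Convert the arithmetic mean of the MSEs, not of the PSNRs."""
    return 10.0 * np.log10(1.0 / np.mean(mse_frames))

mse = [0.001, 0.004, 0.002]
print(per_frame_psnr(mse))
print(average_psnr(mse))
```

Averaging the MSEs makes isolated badly decoded frames weigh heavily on the final figure, which is the intended behavior for comparing UEP and EEP.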
The PSNR and MSSIM histories Γ_i and ι_i plotted in Fig. 7 are obtained on a simulated channel with an error rate of P_S=7×10^{−2}, equivalent to a bit error rate of P_b=3.5×10^{−2}. The red line depicts the quality of the received UEP frames after decompression, with additional artifacts due to the errors introduced by the loss of pieces during transmission. For comparison purposes, we have also reported the quality of an EEP profile (blue line) having the same total rate R. The average performance of the UEP method results in a PSNR of \(\bar {\Gamma }=25.3\) dB, while that of the EEP method is \(\bar {\Gamma }=19.6\) dB. Similarly, we obtain an MSSIM of \(\bar {\iota }=0.84\) and \(\bar {\iota }=0.66\) for the UEP and EEP methods, respectively.
Figure 8 shows the obtained average PSNR \(\bar {\Gamma }\) and MSSIM \(\bar {\iota }\) for different values of the channel symbol error rate (SER) P_S, varying from 2×10^{−2} to 10^{−1} (P_b varying from 10^{−2} to 5×10^{−2}). The results are plotted for both the UEP and EEP cases. The curves show that the proposed UEP method outperforms the EEP method, in terms of PSNR, by up to nearly 7 dB at P_S=10^{−1} (P_b=5×10^{−2}). It is also evident that the UEP method begins to provide better results than EEP starting from a SER of P_S=3×10^{−2} (P_b=1.5×10^{−2}), while for SER values lower than P_S=2×10^{−2} (P_b=10^{−2}) the two methods are equivalent. Similar considerations hold for the MSSIM. In this case, the quality index of UEP begins to improve over that of EEP for SERs higher than P_S=4×10^{−2} (P_b=2×10^{−2}); the advantage of UEP over EEP is thus only slightly reduced when considering this more subjective-quality-related metric.
Figure 9 shows the a posteriori cumulative distribution function (CDF) of the per-frame MSE, F_E(ε), defined as the computed probability that the MSE of a decompressed frame is lower than ε. The figure plots the CDFs for the UEP and EEP cases, for two different values of channel SER, P_S=5×10^{−2} (P_b=2.5×10^{−2}) and P_S=8×10^{−2} (P_b=4×10^{−2}). Setting, for instance, a threshold probability of 0.9, we find that, at the higher SER, the UEP MSEs are lower than 0.009, whereas the EEP MSEs are lower than 0.05. Moreover, at this SER, for every threshold probability, the UEP curve always gives lower MSEs than the EEP curve. Similarly, at the lower SER, the 90 % threshold values are 0.0017 and 0.0025 for UEP and EEP, respectively. However, in this case, at a probability of 79 % and an MSE of 0.0008, the two curves cross each other. This behavior can be explained as follows: in the EEP case, a few values of ε_i are much worse than the worst values obtained in the UEP case; conversely, UEP produces many ε_i values that are slightly worse than the corresponding EEP ones, but they never degrade much beyond that. This is the main reason for the improved average PSNR \(\bar {\Gamma }\) exhibited by the UEP method over the EEP method in Fig. 8.
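As a minimal sketch (function names are ours, not the paper's), the empirical CDF F_E(ε) and the MSE value reached at a given threshold probability can be computed as:

```python
import numpy as np

# Empirical (a posteriori) CDF of the per-frame MSE, and the MSE value
# not exceeded with a given threshold probability (names illustrative).
def empirical_cdf(samples):
    """Return (x, F) such that F[k] is the fraction of samples <= x[k]."""
    x = np.sort(np.asarray(samples))
    F = np.arange(1, x.size + 1) / x.size
    return x, F

def threshold_at(samples, p):
    """Smallest MSE eps with empirical P(MSE <= eps) >= p."""
    x, F = empirical_cdf(samples)
    return x[np.searchsorted(F, p)]
```

For example, `threshold_at(per_frame_mses, 0.9)` returns the 90 % threshold value used in the comparison above.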
Further evidence that this phenomenon affects the decoded video quality is given by Fig. 10. The figure shows the measured probability that the received compressed video frame cannot be decoded at all, due to the presence of uncorrectable errors in the first piece (i=0). Both methods can successfully decode at least the first packet up to a SER of P_S=4×10^{−2} (P_b=2×10^{−2}). For larger error rates, the UEP method attains a maximum of about 5 % probability of no decoding, whereas the EEP method failure probability is an order of magnitude higher and grows to 80 %. Figure 11 shows two decompressed frames (frames no. 6 and 360) obtained during transmission on a channel with an error rate of P_S=7×10^{−2} (P_b=3.5×10^{−2}). In this case, the error sequence has been exactly the same for the UEP and EEP methods. Figure 11 a and c display the frames obtained with EEP, while Fig. 11 b and d present the frames obtained with UEP. Simulation results and samples of the original and decompressed video clips are available for download [59].
Performance on AWGN and Rayleigh channels
The performance of the proposed system has also been verified on AWGN and uncorrelated (f_D≈R_b) Rayleigh flat fading channels, for the video sequence only (without loss of generality, the results apply also to the static image case). Figure 12 shows the average PSNR and MSSIM versus the average channel SNR γ_b. For both types of channels, UEP outperforms EEP. This is not surprising, as a proper combination of interleaving depth and channel coding results in an equivalent BSC, which we have already simulated. In the case of correlated Rayleigh fading (f_D<R_b), the bit interleaver size has to be chosen to span an amount of bits such that, after deinterleaving, the fades are practically uncorrelated. If transmission on a channel with Doppler spread f_D adopts an N_row×N_col block interleaver, then, after deinterleaving, the equivalent Doppler spread becomes N_row times higher, N_row f_D [60]. Thus, by properly choosing the interleaver dimension N_row, one can revert to the condition of uncorrelated Rayleigh fading, for which the performance is plotted in the right-side curves of Fig. 12. If, on the other hand, the Doppler spread is so low (as happens on nearly static NLOS channels, f_D≪R_b) that the interleaver size would exceed the available memory or the decoding delay bounds, then the periodic feedback from the receiver allows the transmitter to adapt the protection profile and coding rate to the measured channel conditions. In this case, in the short term, the performance is practically that of the AWGN case, for which the curves on the left side of Fig. 12 apply.
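A minimal sketch of the block interleaving discussed above, assuming a simple N_row × N_col row-write/column-read structure (the paper does not prescribe this exact implementation):

```python
import numpy as np

# Row-write/column-read block interleaver sketch.  After de-interleaving,
# bits that were adjacent on the channel end up n_rows positions apart,
# so the equivalent Doppler spread is raised by a factor n_rows.
def block_interleave(bits, n_rows, n_cols):
    return np.asarray(bits).reshape(n_rows, n_cols).T.ravel()

def block_deinterleave(bits, n_rows, n_cols):
    return np.asarray(bits).reshape(n_cols, n_rows).T.ravel()
```

Round-tripping through the two functions restores the original order, while a burst of consecutive channel errors is dispersed across distant positions of the de-interleaved stream.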
Computational complexity
The optimization problem requires the knowledge of the distortion profile of the image. Using JPEG 2000 compression, the partial distortions M_i (and the CCD m_i) can be easily obtained during the rate allocation step of the JPEG 2000 bitstream preparation [55]; thus, these values come virtually at no cost.
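As a sketch, the CCD m_i is just the tail sum of the partial distortions M_j delivered by the rate allocator (names and values are illustrative):

```python
import numpy as np

# CCD as the tail (reverse cumulative) sum of the partial distortions
# M_j produced by the JPEG 2000 rate allocator (values illustrative).
def ccd(partial_distortions):
    """m_i = sum of M_j for j = i .. N_k-1; non-increasing in i."""
    M = np.asarray(partial_distortions)
    return np.cumsum(M[::-1])[::-1]

print(ccd([4.0, 2.0, 1.0]))
```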
For the calculation of the C, s, and d coefficients, a lookup table (LUT) can be used to store the parameters for different values of the packet size N_k, of the channel bit/symbol error rate P_b/P_S, and possibly even for different channel coding algorithms (e.g., convolutional, binary RS, low-density parity-check codes). The LUT can then be accessed to provide the parameters used in the protection profile generation, with large savings compared to storing the entire UEP profile for each combination of the three variables. The coefficients stored in the LUT can be calculated offline and smoothly interpolated to provide any intermediate value that the system may request.
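A possible shape for such a LUT, with log-domain interpolation over the channel symbol error rate, is sketched below; the tabulated values are placeholders, not the best-fit values of Table 3:

```python
import numpy as np

# Hypothetical LUT of model parameters (C, s, d), indexed by channel
# symbol error rate for one packet size and code family.  The numbers
# are placeholders, not the best-fit values of Table 3.
lut_ps = np.array([1e-3, 1e-2, 1e-1])
lut_C = np.array([1e-4, 1e-3, 1e-2])
lut_s = np.array([5e-3, 2e-3, 5e-4])
lut_d = np.array([2000.0, 2200.0, 2600.0])

def model_params(ps):
    """Smoothly interpolate (C, s, d); P_S spans decades, so the
    interpolation is done on log10(P_S)."""
    x, xp = np.log10(ps), np.log10(lut_ps)
    return (np.interp(x, xp, lut_C),
            np.interp(x, xp, lut_s),
            np.interp(x, xp, lut_d))
```

A full implementation would use one such table per packet size and per channel code family.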
The calculation of the optimal protection profile in (3) depends on the geometric mean of the CCD function. In order to avoid overflow or underflow problems due to floating-point rounding during the computation, the geometric mean should be calculated logarithmically as
in which case it takes N_k logarithms, (N_k−1) additions, 1 division, and 1 exponentiation to be computed. Then, we need (N_k−1) additions for the computation of the CCD function, N_k multiplications for the logarithm operand (one division to obtain the inverse of the geometric mean), N_k logarithm operations, N_k multiplications for logarithm result scaling (one division to obtain the inverse of s, if not already stored in this form in the LUT), and N_k additions. In summary, implementing (3) requires a total of (3N_k−2) additions, 2N_k multiplications, 2N_k logarithms, 3 divisions, and 1 exponentiation. Assuming that natural logarithms and powers of e can be implemented by means of another LUT, with sufficient precision once the dynamic ranges of the operands have been characterized, the asymptotic complexity becomes \(\mathcal {O}\left (N_{k}\right)\). In contrast, the solutions presented in [16] or [58] require multiple evaluations of expressions similar to (2), which are computationally more cumbersome.
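The log-domain computation of the geometric mean can be sketched as follows; it uses exactly N_k logarithms, (N_k − 1) additions, 1 division, and 1 exponentiation:

```python
import math

# Geometric mean computed through logarithms, so the running product
# of N_k terms can neither overflow nor underflow.
def geometric_mean_log(values):
    n = len(values)                                # N_k terms
    log_sum = sum(math.log(v) for v in values)     # N_k logs, N_k-1 adds
    return math.exp(log_sum / n)                   # 1 division, 1 exp
```

A direct product of thousands of CCD values would quickly leave the double-precision range; the log-domain form stays well conditioned for any N_k.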
Conclusions
The transmission of video over error-prone wireless channels can be addressed by adding an adequate error protection layer to the streams, once the characteristics of the channel are known. In this work, we have presented a UEP strategy devised to allocate channel code bits, using an optimization algorithm that is computationally light during the search for the UEP profile. The general formulation of the problem has been solved with a Lagrangian minimization method. The resulting closed-form UEP expression requires only readily available data: the image rate-distortion curve, the average error protection code rate, the typical allowed packet size for transmission, and the channel error model (represented by one parameter). In addition, we have presented a practical method for implementing the derived UEP profile using RS codes. The simulated performance of the proposed UEP strategy shows that it outperforms an EEP strategy, that its results are comparable with the UEP performance of other methods presented in similar works, while having a lower computational complexity, and that this UEP method can effectively counteract the impairments introduced by an error-prone transmission channel.
Appendix A
A.1 Log-linear approximation of residual BER curves
The expression (1), derived from a more general expression found in [51], approximates the functions h(n_l) with exponentials, at least in definite regions where h(·) is lower than 10^{−1}, which is a common requirement. In order to show the validity of this approximation, we have simulated the performance of RS error coding applied to the devised packetization scheme. Each piece of k bytes has been split into message words of \(\tilde {k}= 32\) bytes, and an RS code with rate \(\tilde {k}/\tilde {n}\) has been applied to each word. The resulting codewords have then been concatenated, producing a piece of n bytes. Multiple pieces have been transmitted over a Q-ary channel (Q=256) with a defined symbol error rate P_S, and the residual error probability after decoding, h(n), has been measured. For a fixed P_S, the value of n has been increased and the measurement of the residual error rate repeated. This procedure has been carried out for several different values of P_S. Figure 13 shows the set of residual error probability curves obtained with a piece length of k=1 600 bytes, for channel symbol error rates from P_S=10^{−3} (P_b=5×10^{−4}) to P_S=10^{−1} (P_b=5×10^{−2}). Similar sets of error rate curves have been obtained for different piece lengths. The resulting curves have been fit using a nonlinear least-squares method in the logarithmic domain, thus providing the relevant model parameters; Table 3 lists the parameters C, s, and d for several piece lengths and channel symbol error probabilities P_S.
In Fig. 13, solid lines represent the results of simulations, whereas the dashed lines are obtained by evaluating (1) with the bestfit model parameters of Table 3, for every channel symbol error rate.
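As an illustration of the fitting step (not the paper's exact three-parameter model), a least-squares line fit in the logarithmic domain recovers the decay of a synthetic exponential residual-error curve:

```python
import numpy as np

# Least-squares fit in the log domain of a synthetic residual-error
# curve.  We assume a plain exponential tail h(n) = A*exp(-s*n) here,
# which is simpler than the paper's three-parameter (C, s, d) model.
n = np.array([1700.0, 1800.0, 1900.0, 2000.0, 2100.0])
h = 0.05 * np.exp(-0.004 * (n - 1700.0))   # synthetic measurements

slope, intercept = np.polyfit(n, np.log(h), 1)
s_hat = -slope             # estimated decay
A_hat = np.exp(intercept)  # estimated amplitude
print(s_hat)
```

Working in the log domain turns the exponential fit into a linear one, which is also why the fitted dashed curves in Fig. 13 appear as straight lines on the logarithmic axis.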
The following can be said about the C, s, and d parameters:
- The amplitude C is generally much lower than 1.
- The decay s increases as the channel/channel code performance improves.
- The offset d is the point where the error rate curve becomes linear, and it is higher than 10^{−2}.
- d is greater than the values of k that we have used.
For other types of transmission methods, such as BPSK on AWGN or Rayleigh fading channels, the results and comments discussed above remain valid, provided that a bit/byte interleaver spreading consecutive errors apart is present.
A.2 Proof of Lemma 1
Proof.
The constrained minimization problem expressed by (2) can be solved using the Lagrange multipliers method, as
First, we simplify the probability of having no received errors up to piece i, p_i=p_i(n_0,n_1,…,n_i). We suppose that this probability is expressed by the product of the correct reception probabilities of the individual pieces, since the pieces are decoded independently of each other, as
where h(n_l) is the probability that a piece of k bits (n_l bits after channel encoding) is received with errors. Under such conditions, the products in (13) are approximated as
since products of h(n_l) terms can be neglected. Supposing that all h(n_l) have the same value, it can be found that, when h(n_l)<2×10^{−2}, the approximation (14) is valid with an error lower than 10 %.
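The quality of this first-order approximation can be checked numerically; the sketch below compares the exact product form with the linearized form, for equal per-piece error probabilities (values are illustrative):

```python
# Compare the exact probability of error-free reception of `count`
# pieces with the first-order approximation of (14), assuming equal
# per-piece residual error probabilities h (illustrative values).
def exact_and_approx(h, count):
    exact = (1.0 - h) ** count        # product form, as in (13)
    approx = 1.0 - count * h          # linearized form, as in (14)
    return exact, approx

exact, approx = exact_and_approx(2e-2, 5)
print(exact, approx)
```

At h=2×10^{−2} and five pieces, the two values differ by less than half a percent, consistent with the stated validity region.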
After substituting (14) in (13) and differentiating, we obtain
With the approximations (1) and (15), and neglecting C^2 (since C≪1), (12) becomes
The summation in the denominator of (16) is the CCD \({m_{i}} = \sum_{j=i}^{N_k-1} M_j\), which is a non-increasing function of i (as i increases, there are fewer M_j terms to sum). Then, n_i is found to be
In order to find the Lagrange multiplier λ, we use the constraint in (2). After some manipulation, the constraint becomes
where \(\hat {m} = {(\Pi _{i = 0}^{N_k-1}{m_{i}})^{1/{N_{k}}}}\) is the geometric mean of the CCD, \(T = B(\bar {n}-k)/k\), and D=N_k(d−k). We can substitute (18) into (17) and find the closed-form solution to the minimization problem
A.3 Proof of Lemma 3
Proof.
If the exact RD curve of the image is not known, it is still possible to calculate an approximate UEP profile by considering the MSE profile, sampled at Δρ bit/symbol steps, using the bounds
for 0≤i<N_k, where the lower bound involves the differential entropy H of the actual source, and the upper bound is calculated under the hypothesis of a Gaussian source (with encoded image coefficients that are Gaussian distributed with variance σ^2).
We start by expressing the CCD, for both bounds in (20), as
where either K=(2^{2H−1}/π e) or K=σ^2. Considering i≪N_k and N_k large, (21) can be approximated by m_i≅K 2^{−2Δρi}/(1−2^{−2Δρ}). The geometric mean is then expressed, through the arithmetic mean of the exponents, and approximated as \(\hat{m} \cong \frac{K}{1-2^{-2\Delta\rho}}\, 2^{-\Delta\rho (N_k-1)} \approx \frac{K}{1-2^{-2\Delta\rho}}\, 2^{-\Delta\rho N_k}\) when N_k is large. Eventually, the approximated protection profile \(n^{\prime }_{i}\) is given by
References
H Singh, J Oh, C Kweon, X Qin, HR Shao, CY Ngo, A 60 GHz wireless network for enabling uncompressed video communication. IEEE Commun. Mag. 46(12), 71–78 (2008). doi:10.1109/MCOM.2008.4689210.
CJ Hansen, WiGig: multi-gigabit wireless communications in the 60 GHz band. IEEE Wireless Commun. 18(6), 6–7 (2011). doi:10.1109/MWC.2011.6108325.
S Yong, CC Chong, An overview of multi-gigabit wireless through millimeter wave technology: potentials and technical challenges. EURASIP J. Wirel. Commun. Netw. 2007(078907) (2007). doi:10.1155/2007/78907.
G Lawton, Wireless HD video heats up. IEEE Computer. 41(12), 18–20 (2008). doi:10.1109/MC.2008.509.
S Srinivasan, in Proc. of 20th Int. Conf. on Computer Commun. and Networks (ICCCN). An assessment of technologies for in-home entertainment (IEEE, Maui, Hawaii, 2011), pp. 1–6. doi:10.1109/ICCCN.2011.6005803.
ISO/IEC, JPEG 2000 image coding system – part 1: core coding system. ISO/IEC 15444-1 (Int. Standards Org./Int. Electrotech. Comm., Geneva, Switzerland, 2004).
FO Devaux, C De Vleeschouwer, Parity bit replenishment for JPEG 2000-based video streaming. EURASIP J. Image Video Process. 2009(1), 683820 (2009). doi:10.1155/2009/683820.
ISO/IEC, JPEG 2000 profiles for broadcast applications (ISO/IEC 15444-1:2004/Amd 3:2010, Int. Standards Org./Int. Electrotech. Comm., Geneva, Switzerland, 2010).
ISO/IEC, ISO/IEC 13818-1:2007/FPDAM 5 (Int. Standards Org./Int. Electrotech. Comm., Geneva, Switzerland, 2010).
S Pejoski, V Kafedziski, in Proc. of 5th European Conf. on Circuits and Sys. for Commun. (ECCSC). Causal video transmission over fading channels with full channel state information (IEEE, Belgrade, Serbia, 2010), pp. 294–297.
M Etoh, T Yoshimura, Advances in wireless video delivery. Proc. IEEE. 93(1), 111–122 (2005). doi:10.1109/JPROC.2004.839605.
A Albanese, J Blomer, J Edmonds, M Luby, M Sudan, Priority encoding transmission. IEEE Trans. Inf. Theory. 42(6), 1737–1744 (1996). doi:10.1109/18.556670.
R de Albuquerque, D Cunha, C Pimentel, On the complexity-performance tradeoff in soft-decision decoding for unequal error protection block codes. EURASIP J. Adv. Signal Process. 2013(1), 28 (2013). doi:10.1186/1687-6180-2013-28.
ISO/IEC, JPEG 2000 image coding system – part 11: wireless (ISO/IEC WD2.0 15444-11, Int. Standards Org./Int. Electrotech. Comm., Geneva, Switzerland, 2003).
M Agueh, JF Diouris, M Diop, FO Devaux, C De Vleeschouwer, B Macq, Optimal JPWL forward error correction rate allocation for robust JPEG 2000 images and video streaming over mobile ad hoc networks. EURASIP J. Adv. Signal Process. 2008(1), 192984 (2008). doi:10.1155/2008/192984.
G Baruffa, P Micanti, F Frescura, Error protection and interleaving for wireless transmission of JPEG 2000 images and video. IEEE Trans. Image Process. 18(2), 346–356 (2009). doi:10.1109/TIP.2008.2008421.
C Mairal, M Agueh, in Proc. of 2nd Int. Conf. on Advan. in Multimedia (MMEDIA). Smooth and scalable wireless JPEG 2000 images and video streaming with dynamic bandwidth estimation (IARIA, Athens, Greece, 2010), pp. 174–179. doi:10.1109/MMEDIA.2010.40.
M Murroni, A power-based unequal error protection system for digital cinema broadcasting over wireless channels. Signal Process. Image Commun. 22(3), 331–339 (2007). doi:10.1016/j.image.2006.12.006.
MI Iqbal, HJ Zepernick, U Engelke, in Proc. of 2nd Int. Conf. on Signal Process. and Commun. Sys. (ICSPCS 2008). Error sensitivity analysis for wireless JPEG 2000 using perceptual quality metrics (IEEE, Gold Coast, Australia, 2008), pp. 1–9. doi:10.1109/ICSPCS.2008.4813665.
MI Iqbal, HJ Zepernick, U Engelke, in Proc. of 6th Int. Symp. on Wireless Commun. Sys. (ISWCS 2009). Perceptual-based quality assessment of error protection schemes for wireless JPEG 2000 (IEEE, Siena, Italy, 2009), pp. 348–352. doi:10.1109/ISWCS.2009.5285217.
W Xiang, A Clemence, J Leis, Y Wang, in Proc. of 7th Int. Conf. on Inf., Commun. and Signal Process. (ICICS 2009). Error resilience analysis of wireless image transmission using JPEG, JPEG 2000 and JPWL (IEEE, Macau, China, 2009), pp. 1–6. doi:10.1109/ICICS.2009.5397742.
KM Alajel, W Xiang, J Leis, in Proc. of 4th Int. Conf. on Signal Proc. and Commun. Sys. (ICSPCS). Error resilience performance evaluation of H.264 I-frame and JPWL for wireless image transmission (IEEE, Gold Coast, Australia, 2010), pp. 1–7. doi:10.1109/ICSPCS.2010.5709766.
C Bergeron, B Gadat, C Poulliat, D Nicholson, in Proc. of 17th IEEE Int. Conf. on Image Process. (ICIP). Extrinsic distortion based source-channel allocation for wireless JPEG2000 transcoding systems (IEEE, Hong Kong, China, 2010), pp. 4469–4472. doi:10.1109/ICIP.2010.5651228.
J Chakareski, PA Chou, Application layer error-correction coding for rate-distortion optimized streaming to wireless clients. IEEE Trans. Commun. 52(10), 1675–1687 (2004). doi:10.1109/TCOMM.2004.836436.
PA Chou, Z Miao, Rate-distortion optimized streaming of packetized media. IEEE Trans. Multimedia. 8(2), 390–404 (2006). doi:10.1109/TMM.2005.864313.
ISO/IEC, Coding of audio-visual objects – part 10: advanced video coding (ISO/IEC 14496-10, Int. Standards Org./Int. Electrotech. Comm., Geneva, Switzerland, 2010).
P Cataldi, M Grangetto, T Tillo, E Magli, G Olmo, Sliding-window Raptor codes for efficient scalable wireless video broadcasting with unequal loss protection. IEEE Trans. Image Process. 19(6), 1491–1503 (2010). doi:10.1109/TIP.2010.2042985.
L Liang, P Salama, E Delp, Unequal error protection techniques based on Wyner-Ziv coding. EURASIP J. Image Video Process. 2009(1), 474689 (2009). doi:10.1155/2009/474689.
S Ahmad, R Hamzaoui, MM Al-Akaidi, Unequal error protection using fountain codes with applications to video communication. IEEE Trans. Multimedia. 13(1), 92–101 (2011). doi:10.1109/TMM.2010.2093511.
J Lu, A Nosratinia, B Aazhang, in Proc. of Int. Conf. on Image Process. (ICIP 98), 2. Progressive source-channel coding of images over bursty error channels (IEEE, Chicago, Illinois, 1998), pp. 127–131. doi:10.1109/ICIP.1998.723331.
VM Stankovic, R Hamzaoui, Z Xiong, Real-time error protection of embedded codes for packet erasure and fading channels. IEEE Trans. Circuits Syst. Video Technol. 14(8), 1064–1072 (2004). doi:10.1109/TCSVT.2004.831964.
N Thomos, NV Boulgouris, MG Strintzis, Optimized transmission of JPEG2000 streams over wireless channels. IEEE Trans. Image Process. 15(1), 54–67 (2006). doi:10.1109/TIP.2005.860338.
Y Zhang, S Qin, B Li, Z He, Rate-distortion optimized unequal loss protection for video transmission over packet erasure channels. Signal Process. Image Commun. 28(10), 1390–1404 (2013). doi:10.1016/j.image.2013.05.009.
N Ramzan, S Wan, E Izquierdo, Joint source-channel coding for wavelet-based scalable video transmission using an adaptive turbo code. EURASIP J. Image Video Process. 2007(1), 047517 (2007). doi:10.1155/2007/47517.
CP Ho, CJ Tsai, Content-adaptive packetization and streaming of wavelet video over IP networks. EURASIP J. Image Video Process. 2007(1), 045201 (2007). doi:10.1155/2007/45201.
S Bezan, S Shirani, RD optimized, adaptive, error-resilient transmission of MJPEG2000-coded video over multiple time-varying channels. EURASIP J. Adv. Signal Process. 2006(1), 079769 (2006). doi:10.1155/ASP/2006/79769.
C Schwartz, F da Silva Marques, M da Silva Pinho, in International Telecommunications Symposium (ITS). An unequal coding scheme for remote sensing systems based on CCSDS recommendations (IEEE, Sao Paulo, Brazil, 2014), pp. 1–5. doi:10.1109/ITS.2014.6947971.
D Pascual Biosca, M Agueh, in Mobile Multimedia Communications. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 79, ed. by L Atzori, J Delgado, D Giusto. Optimal interleaving for robust wireless JPEG 2000 images and video transmission (Springer, Berlin Heidelberg, 2012), pp. 217–226. doi:10.1007/978-3-642-30419-4_19.
M Agueh, S Ataman, H Soude, in Fourth International Conference on Communications and Networking in China (ChinaCOM 2009). A low time-consuming smart FEC rate allocation scheme for robust wireless JPEG 2000 images and video transmission (IEEE, Xi’an, China, 2009), pp. 1–5. doi:10.1109/CHINACOM.2009.5339854.
J Abot, C Olivier, C Perrine, Y Pousset, A link adaptation scheme optimized for wireless JPEG 2000 transmission over realistic MIMO systems. Signal Process. Image Commun. 27(10), 1066–1078 (2012). doi:10.1016/j.image.2012.08.003.
F Fiorucci, G Baruffa, P Micanti, F Frescura, in IEEE International Conference on Multimedia and Expo (ICME). A real-time, DSP-based JPWL implementation for wireless High Definition video transmission (IEEE, Barcelona, Spain, 2011), pp. 1–4. doi:10.1109/ICME.2011.6012054.
MI Iqbal, HJ Zepernick, in International Symposium on Communications and Information Technologies (ISCIT). Error protection for wireless imaging: providing a tradeoff between performance and complexity (IEEE, Tokyo, Japan, 2010), pp. 249–254. doi:10.1109/ISCIT.2010.5664847.
MI Iqbal, HJ Zepernick, A framework for error protection of region of interest coded images and videos. Signal Process. Image Commun. 26(4–5), 236–249 (2011). doi:10.1016/j.image.2011.03.001.
S Bahmani, IV Bajic, A Haj-Shirmohammadi, Joint decoding of unequally protected JPEG2000 bitstreams and Reed-Solomon codes. IEEE Trans. Image Process. 19(10), 2693–2704 (2010). doi:10.1109/TIP.2010.2049529.
M Ouaret, F Dufaux, T Ebrahimi, Error-resilient scalable compression based on distributed video coding. Signal Process. Image Commun. 24(6), 437–451 (2009). doi:10.1016/j.image.2009.02.011.
E Baccaglini, T Tillo, G Olmo, Image and video transmission: a comparison study of using unequal loss protection and multiple description coding. Multimedia Tools Appl. 55(2), 247–259 (2011). doi:10.1007/s11042-010-0574-3.
Z Chen, M Xu, L Yin, J Lu, in International Conference on Wireless Communications and Signal Processing (WCSP). Unequal error protected JPEG 2000 broadcast scheme with progressive fountain codes (IEEE, Nanjing, China, 2011), pp. 1–5. doi:10.1109/WCSP.2011.6096843.
T Nakachi, Y Tonomura, T Fujii, in 7th International Conference on Signal Processing and Communication Systems (ICSPCS). A conceptual foundation of NSCW transport design using an MMT standard (IEEE, Gold Coast, Australia, 2013), pp. 1–6. doi:10.1109/ICSPCS.2013.6723976.
G Baruffa, F Frescura, P Micanti, B Villarini, in Proc. of 19th IEEE Int. Conf. on Image Process. (ICIP 2012). An optimal method for searching UEP profiles in wireless JPEG 2000 video transmission (IEEE, Orlando, FL, 2012), pp. 1645–1648. doi:10.1109/ICIP.2012.6467192.
L Pu, MW Marcellin, B Vasic, A Bilgin, in Proc. of IEEE Int. Conf. on Image Process. (ICIP 2005), 3. Unequal error protection and progressive decoding for JPEG2000 (IEEE, Genoa, Italy, 2005), pp. 896–899. doi:10.1109/ICIP.2005.1530537.
S Feldmann, M Radimirsch, in IEEE Int. Symp. on Pers., Indoor and Mobile Radio Commun. (PIMRC 2002), 3. A novel approximation method for error rate curves in radio communication systems (IEEE, Lisboa, Portugal, 2002), pp. 1003–1007. doi:10.1109/PIMRC.2002.1045178.
I Amonou, N Cammas, S Kervadec, S Pateux, Optimized rate-distortion extraction with quality layers in the scalable extension of H.264/AVC. IEEE Trans. Circuits Syst. Video Technol. 17(9), 1186–1193 (2007). doi:10.1109/TCSVT.2007.906870.
Xiph.org media. https://media.xiph.org/video/derf/. Accessed 1 March 2016.
Kakadu v. 6.0. http://www.kakadusoftware.com. Accessed 1 March 2016.
D Taubman, M Marcellin, JPEG2000: Image Compression Fundamentals, Standards and Practice (Springer, New York, NY, 2002).
JG Proakis, M Salehi, Digital Communications: Fifth Edition (McGraw-Hill Education, Singapore, 2007).
V Sanchez, MK Mandal, in Proc. of Int. Conf. on Consumer Electronics (ICCE 2002). Robust transmission of JPEG 2000 images over noisy channels (IEEE, Los Angeles, CA, USA, 2002), pp. 80–81. doi:10.1109/ICCE.2002.1013935.
BA Banister, B Belzer, TR Fischer, Robust image transmission using JPEG2000 and turbo codes. IEEE Signal Process. Lett. 9(4), 117–119 (2002). doi:10.1109/97.1001646.
G Baruffa, F Frescura, Adaptive error protection coding for wireless transmission of motion JPEG 2000 video. http://dante.diei.unipg.it/~baruffa/uep2015/. Accessed 1 March 2016.
J Lai, NB Mandayam, Performance of Reed-Solomon codes for hybrid-ARQ over Rayleigh fading channels under imperfect interleaving. IEEE Trans. Commun. 48(10), 1650–1659 (2000). doi:10.1109/26.871390.
Acknowledgements
The authors thank Paolo Micanti and Barbara Villarini for the help on theoretical aspects and simulations carried out for this paper.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
GB and FF worked jointly on the development of the theoretical model. GB performed the simulations and drafted the manuscript, FF carried out the statistical analysis of video and helped to draft the manuscript. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Baruffa, G., Frescura, F. Adaptive error protection coding for wireless transmission of motion JPEG 2000 video. J Image Video Proc. 2016, 10 (2016). https://doi.org/10.1186/s13640-016-0111-z
Keywords
 Motion JPEG 2000
 Wireless video transmission
 Lagrangian optimization
 Unequal error protection
 Forward error correction