
Modeling of SSIM-based end-to-end distortion for error-resilient video coding

Abstract

Conventional end-to-end distortion models for video measure the overall distortion based on independent estimations of the source distortion and the channel distortion. However, they do not correlate well with perceptual characteristics, where there is a strong inter-relationship among the source distortion, the channel distortion, and the video content. As most compressed videos are presented to human users, a perception-based end-to-end distortion model should be developed for error-resilient video coding. In this paper, we propose a structural similarity (SSIM)-based end-to-end distortion model to optimally estimate the content-dependent perceptual distortion due to quantization, error concealment, and error propagation. Experiments show that the proposed model brings better visual quality for H.264/AVC video coding over packet-switched networks.

1 Introduction

Most video coding standards achieve high compression using transform coding and motion-compensated prediction, which creates strong spatial-temporal dependencies in compressed videos. Thus, transmitting highly compressed video streams over packet-switched networks may suffer from spatial-temporal error propagation and lead to severe quality degradation at the decoder side [1]. To protect compressed videos from packet loss, error-resilient video coding becomes a crucial requirement. Given the transmission conditions, such as bit rate and packet loss ratio, the target of error-resilient video coding is to minimize the distortion at the receiver [2]:

$$\min D \quad \text{s.t.} \quad R \le R_T, \ \text{given the packet loss ratio } \rho$$
(1)

where $D$ and $R$ denote the distortion at the receiver and the bit rate, respectively, $R_T$ is the target bit rate, and ρ is the packet loss ratio. Note that we assume the transmission conditions are available at the encoder throughout this paper. They can be either specified as part of the initial negotiations or adaptively calculated from information provided by the transmission protocol [3].

Assume a packet containing video data is lost in the channel and the decoder performs error concealment. Clearly, the resulting reconstruction at the decoder differs from the reconstruction at the encoder, and the difference propagates to the following frames through the prediction chain. Therefore, the key challenge of error-resilient video coding is to estimate, at the encoder, the reconstruction error and error propagation at the decoder, which is needed to optimize the coding options and solve the above minimization problem.

A number of end-to-end distortion models (also known as joint source-channel distortion models) for video transmission over lossy channels have been proposed in the literature. In [4, 5], several low-complexity estimation models were presented for low error rate applications. For more accurate distortion estimation, the work in [2] developed a frame-level recursive distortion model, which relates to the channel-induced distortion due to bit errors. Another efficient approach is the well-known recursive optimal pixel estimation (ROPE) model [3] and its extensions [6-10], which estimate the overall distortion due to quantization, error concealment, and error propagation. Recently, several novel source-channel distortion models were developed for distributed video coding [11], generic multi-view video transmission [12], and error-resilient schemes based on forward error correction [13].

However, these models are derived in terms of mean squared error (MSE), which has been criticized for its weak correlation with perceptual characteristics. As most compressed videos are presented to humans, it is meaningful to incorporate visual features into error-resilient video coding to protect the important visual information of compressed videos from packet loss. Thus, several region-of-interest (ROI)-based approaches were presented to better evaluate the visual quality [14, 15]. However, ROI-based approaches do not provide accurate distortion estimation, and ROI determination may be difficult for most videos, especially videos of natural scenes. Therefore, a perception-based end-to-end distortion model is expected to provide a more general and accurate perceptual distortion estimation.

In [16], the structural similarity (SSIM)-based end-to-end distortion was predicted by several factors extracted from the encoder. Although the variation trend is very similar at the block level, the estimated SSIM cannot reach the peak points of the actual SSIM. In [17], a parametric model was proposed to accurately estimate the degradation of SSIM over error-prone networks, considering content, encoding, and network parameters. However, the encoding parameters only include the number of slices per frame and the GOP length, so the model cannot estimate the relative quality of a block under different coding modes. In our earlier work [18], we introduced a block-level SSIM-based distortion model into error-resilient video coding to minimize the perceptual distortion. In [19], an improved SSIM-based distortion model and a Lagrange multiplier decision method were proposed for better coding performance. In [18] and [19], the expected SSIM scores are estimated from the expected decoded frames. Due to the nonlinear variation of SSIM, the estimated scores may be less accurate, especially at high bit rates.

In this paper, we develop an SSIM-based end-to-end distortion model to estimate the overall perceptual distortion for H.264/AVC coded video transmission over packet-switched networks. Unlike in the traditional end-to-end distortion model, the perceptual quantization distortion and the perceptual error propagation distortion depend on the video content, which makes the end-to-end distortion complex and difficult to estimate at the encoder. This paper therefore provides two major contributions: 1) an SSIM-based reconstruction quality model; 2) an SSIM-based error propagation model. Both models are used to estimate the content-dependent perceptual distortion at the encoder. Our extensive experimental results demonstrate that the proposed end-to-end distortion model brings visual quality improvements for H.264/AVC video coding over packet-switched networks. We would like to mention that the scheme presented in this paper is an enhanced approach based on our preliminary work in [20]; it adds descriptions of related work, technical and implementation details, and comparison experiments to better evaluate the efficiency of the proposed scheme.

The rest of the paper is organized as follows. Section 2 states the problem and motivation. Section 3 describes the proposed SSIM-based end-to-end distortion model. Section 4 introduces the distortion model into the error-resilient video coding. Section 5 provides the simulation results and Section 6 concludes the paper.

2 Problem and motivation

For H.264/AVC coded video transmission over packet-switched networks, the general formulation of the widely used MSE-based end-to-end distortion can be defined as

$$D = (1-\rho)\,D_Q + \rho\,D_C + (1-\rho)\,D_{P\_f} + \rho\,D_{P\_c}$$
(2)

where ρ is the packet loss ratio and $D$ is the estimated overall distortion. $D_Q$ denotes the source distortion due to quantization. $D_C$, $D_{P\_f}$, and $D_{P\_c}$ represent the channel distortion due to error concealment, error propagation from the reference frames, and error propagation from the concealment frames, respectively.
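As a concrete illustration, the following minimal Python sketch (ours, not the authors' implementation) evaluates Equation 2 for one block; the distortion inputs in the example are hypothetical values.

def mse_end_to_end_distortion(rho, d_q, d_c, d_pf, d_pc):
    # Equation 2: expected MSE-based end-to-end distortion.
    # rho  : packet loss ratio
    # d_q  : quantization (source) distortion D_Q
    # d_c  : error concealment distortion D_C
    # d_pf : error propagation from the reference frames D_P_f
    # d_pc : error propagation from the concealment frames D_P_c
    return (1 - rho) * d_q + rho * d_c + (1 - rho) * d_pf + rho * d_pc

# Example with a 10% packet loss ratio and hypothetical distortion values:
print(mse_end_to_end_distortion(0.1, 12.5, 80.0, 6.0, 40.0))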

With such a model as in Equation 2, the end-to-end distortion can be estimated by individually and independently estimating the quantization distortion, error concealment distortion, and error propagation distortion. This model is appealing because it is easy to calculate and has a clear physical meaning. However, since the perceptual distortion depends on the video content, such individual and independent objective distortion estimation does not correspond well with human perceptual characteristics. For instance, as shown in Figure 1, since the ten compressed or lossy transmitted videos (LIVE video quality database [21, 22]) have different perceptual characteristics, a similar objective distortion may result in different levels of perceptual quantization distortion or transmission distortion. Therefore, in the following section we propose a perception-based end-to-end distortion model for more accurate estimation of the overall perceptual distortion.

Figure 1. Quality comparison. (a) PSNR results for compressed videos; (b) DMOS results for compressed videos; (c) PSNR results for lossy transmitted videos; (d) DMOS results for lossy transmitted videos.

3 SSIM-based end-to-end distortion model

To estimate the overall perceptual distortion of decoded videos, we adopt the SSIM index [23] as the perceptual distortion metric due to its good trade-off between simplicity and efficiency [24]. Three important perceptual components, luminance, contrast, and structure, are combined into an overall similarity measure. For two images x and y, the SSIM index is defined as follows:

$$\mathrm{SSIM}(x,y) = l(x,y)\,c(x,y)\,s(x,y) = \frac{2\mu_x\mu_y + c_1}{\mu_x^2 + \mu_y^2 + c_1}\cdot\frac{2\sigma_{xy} + c_2}{\sigma_x^2 + \sigma_y^2 + c_2}$$
(3)

where $l(x,y)$, $c(x,y)$, and $s(x,y)$ represent the luminance, contrast, and structure perceptual components, respectively; $\mu$, $\sigma^2$, and $\sigma_{xy}$ are the mean, variance, and cross covariance, respectively; and the constants $c_1$ and $c_2$ avoid instability when the means or variances are close to zero.
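For reference, a minimal Python sketch of Equation 3 for a single pair of blocks follows. The full SSIM index in [23] is computed with a sliding window and then averaged over the image; this sketch uses plain block statistics, with the common constants for 8-bit content.

import numpy as np

def ssim_block(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # Equation 3: SSIM between two blocks x and y.
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    luminance = (2 * mu_x * mu_y + c1) / (mu_x ** 2 + mu_y ** 2 + c1)
    contrast_structure = (2 * cov_xy + c2) / (x.var() + y.var() + c2)
    return luminance * contrast_structure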

Based on this perceptual distortion metric, we develop a novel end-to-end distortion model as follows. In Figure 2, $b$ denotes the original block and $\tilde b$ is the corresponding reconstruction block at the decoder. $\hat r$ and $\tilde r$ represent the prediction block of $b$ at the encoder and at the decoder, respectively. $e$ denotes the prediction residual and $\hat e$ its reconstructed value. If the block is received correctly, $\tilde b = \tilde r + \hat e$. When the block is lost, an error concealment technique is used to estimate the missing content. Let $\hat c$ and $\tilde c$ represent the concealment block of $b$ at the encoder and at the decoder, respectively; in this case, $\tilde b = \tilde c$. For a given packet loss ratio ρ, the general SSIM-based end-to-end distortion can be expressed as

Figure 2. End-to-end distortion model in a lossy transmission channel.

$$D_{\mathrm{SSIM}}(b,\tilde b) = (1-\rho)\,E\{1-\mathrm{SSIM}(b,\tilde r+\hat e)\} + \rho\,E\{1-\mathrm{SSIM}(b,\tilde c)\}$$
(4)

with

$$E\{1-\mathrm{SSIM}(b,\tilde r+\hat e)\} = 1 - E\{\mathrm{SSIM}(b,\tilde r+\hat e)\} = 1 - \varphi_r\,\mathrm{SSIM}(b,\hat r+\hat e)$$
(5)
$$E\{1-\mathrm{SSIM}(b,\tilde c)\} = 1 - E\{\mathrm{SSIM}(b,\tilde c)\} = 1 - \varphi_c\,\mathrm{SSIM}(b,\hat c)$$
(6)

where $E\{\cdot\}$ is the expectation operator and $\varphi$ is the error propagation factor, which indicates how transmission errors in the prediction block or concealment block influence the quality of the current block. $\mathrm{SSIM}(b,\tilde r+\hat e)$ and $\mathrm{SSIM}(b,\tilde c)$ denote the quality of prediction coding and error concealment at the decoder, respectively; $\mathrm{SSIM}(b,\hat r+\hat e)$ and $\mathrm{SSIM}(b,\hat c)$ are the corresponding qualities at the encoder.

With this formulation, the reconstruction quality $\mathrm{SSIM}(b,\hat r+\hat e)$ and the error propagation factor $\varphi$ are the key terms of the SSIM-based end-to-end distortion model. In the following sections, we develop models for these two terms based on content dependency.
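A minimal sketch (ours) of how Equations 4 to 6 combine, assuming the encoder-side SSIM scores and the propagation factors of Section 3.2 are already available:

def ssim_end_to_end_distortion(rho, ssim_pred_enc, ssim_conc_enc, phi_r, phi_c):
    # Equation 5: expected distortion when the block is received.
    d_received = 1 - phi_r * ssim_pred_enc   # ssim_pred_enc = SSIM(b, r_hat + e_hat)
    # Equation 6: expected distortion when the block is lost and concealed.
    d_lost = 1 - phi_c * ssim_conc_enc       # ssim_conc_enc = SSIM(b, c_hat)
    # Equation 4: expectation over the packet loss ratio rho.
    return (1 - rho) * d_received + rho * d_lost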

3.1 Development of reconstruction quality model

In this section, we aim to estimate the content-dependent reconstruction quality $\mathrm{SSIM}(b,\hat r+\hat e)$ at the block level (the 4 × 4 transform and quantization unit is used throughout this paper). Since the exact reconstruction quality can only be obtained after de-quantization, the proposed quality estimation avoids the de-quantization process for each candidate mode and thus reduces the computational complexity.

According to the SSIM index, the reconstruction quality is derived as

$$\mathrm{SSIM}(b,\hat r+\hat e) = l(b,\hat r+\hat e)\,c(b,\hat r+\hat e)\,s(b,\hat r+\hat e)$$
(7)

with

$$l(b,\hat r+\hat e) = \frac{2\mu_b\mu_{\hat r} + 2\mu_b\mu_{\hat e} + c_1}{\mu_b^2 + \mu_{\hat r}^2 + \mu_{\hat e}^2 + 2\mu_{\hat r}\mu_{\hat e} + c_1}$$
(8)
$$c(b,\hat r+\hat e)\,s(b,\hat r+\hat e) = \frac{2\sigma_{b\hat r} + 2\sigma_{b\hat e} + c_2}{\sigma_b^2 + \sigma_{\hat r}^2 + \sigma_{\hat e}^2 + 2\sigma_{\hat r\hat e} + c_2}$$
(9)

From Equations 8 and 9, we can see that the estimation of the content-dependent reconstruction quality reduces to estimating three parameters that involve the reconstructed residual $\hat e$ and are therefore unknown before de-quantization: 1) the variance of the reconstructed prediction residual; 2) the cross covariance between the reconstructed prediction residual and the current block; 3) the cross covariance between the reconstructed prediction residual and the prediction block.
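For illustration, the sketch below evaluates Equations 7 to 9 directly from given blocks; in the actual model, the statistics that involve $\hat e$ are instead predicted by the estimation models developed next, so that de-quantization is not needed.

import numpy as np

def reconstruction_ssim(b, r_hat, e_hat, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # Equations 7 to 9: SSIM(b, r_hat + e_hat) from block statistics.
    b, r, e = (np.asarray(a, dtype=np.float64) for a in (b, r_hat, e_hat))
    cov = lambda u, v: ((u - u.mean()) * (v - v.mean())).mean()
    # Equation 8: luminance component.
    l = (2 * b.mean() * r.mean() + 2 * b.mean() * e.mean() + c1) / \
        (b.mean() ** 2 + r.mean() ** 2 + e.mean() ** 2 + 2 * r.mean() * e.mean() + c1)
    # Equation 9: combined contrast and structure components.
    cs = (2 * cov(b, r) + 2 * cov(b, e) + c2) / \
        (b.var() + r.var() + e.var() + 2 * cov(r, e) + c2)
    return l * cs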

It has been reported that the DCT coefficients of the prediction residual closely follow a zero-mean Laplacian distribution [25]. Based on this property, the work in [26] showed that the reconstruction distortion of the prediction residual can be estimated from the Laplacian parameter and the quantization step. Extending the derivation in [26] to the pixel domain, we establish the following two estimation models for the above parameters:

$$\sigma_{\hat e}^2 = M_{\mathrm{var}}(\alpha, QP)\,\sigma_e^2, \quad \alpha = \sqrt{2/\sigma_e^2}$$
(10)
$$\sigma_{b\hat e} = M_{\mathrm{cov}}(\beta, QP)\,\sigma_{be}, \quad \beta = \sqrt{2/\sigma_{be}}$$
(11)

where α and β denote the Laplacian parameters, QP is the quantization parameter in H.264/AVC, and $M_{\mathrm{var}}$ and $M_{\mathrm{cov}}$ are the scaling maps, whose values vary from 0 to 1.

The scaling maps $M_{\mathrm{var}}$ and $M_{\mathrm{cov}}$ are modeled based on four video sequences [27], 'Crow_run', 'In_to_tree', 'Ducks_take_off', and 'Old_town_cross', which contain abundant and varied structural information. Each sequence is coded with intra frames (I frames) and inter frames (P frames), respectively. To cover various reconstruction variances, 11 different QP values are tested, ranging uniformly from 15 to 45 with a step size of 3.

Firstly, we calculate the variances of the initial and reconstructed prediction residuals for the different QP values. Secondly, we obtain a scaling curve through statistical analysis for each tested QP. Finally, we interpolate the eleven scaling curves to establish the scaling map. The fitted scaling maps $M_{\mathrm{var}}$ and $M_{\mathrm{cov}}$ obtained from these simulations are shown in Figure 3 and can be implemented as look-up tables.
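A sketch of how such a look-up table could be built and queried follows; the table values below are placeholders, since the real maps in Figure 3 are fitted from the training sequences.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder grid: the real M_var values are fitted from training data.
alpha_grid = np.linspace(0.05, 2.0, 20)
qp_grid = np.arange(15, 46, 3)  # QP from 15 to 45 with step 3, as in the paper
m_var_table = np.random.default_rng(0).uniform(0.0, 1.0, (alpha_grid.size, qp_grid.size))

m_var = RegularGridInterpolator((alpha_grid, qp_grid), m_var_table)

def estimate_residual_variance(sigma_e2, qp):
    # Equation 10: scale the pre-quantization residual variance.
    alpha = np.sqrt(2.0 / sigma_e2)  # Laplacian parameter
    return float(m_var([[alpha, qp]])[0]) * sigma_e2

print(estimate_residual_variance(sigma_e2=25.0, qp=28))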

Figure 3. The scaling maps. (a) For residual variance by intra-coding; (b) for residual variance by inter-coding; (c) for residual cross covariance by intra-coding; (d) for residual cross covariance by inter-coding.

To demonstrate the accuracy of the proposed reconstruction quality models, 250 frames of each sequence are coded with constant quantization parameters of 20, 25, 30, and 35, respectively. Table 1 shows the average mean absolute deviation (MAD) between the actual and estimated variance, cross covariance, and reconstruction quality. The first two terms reflect the accuracy of the fitted models (10) and (11), respectively; the following three terms show the accuracy of the final estimated SSIM scores. It can be seen that the proposed models predict the reconstruction quality reliably.
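For clarity, the MAD used in Tables 1, 2, and 3 is simply the mean absolute deviation between actual and estimated scores; a minimal sketch:

import numpy as np

def mad(actual, estimated):
    # Mean absolute deviation between actual and estimated scores.
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(estimated))))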

Table 1 MAD between actual and estimated scores

3.2 Development of error propagation model

Error propagation is the key component of the end-to-end distortion model. Different from the independent estimation in the conventional MSE-based end-to-end distortion model, the perceptual error propagation depends on the source distortion or the concealment distortion. In this section, our primary goal is to develop error propagation models that estimate the overall perceptual quality given transmission errors in the prediction block or concealment block.

The error propagation models are motivated by three observations. The first observation concerns the impact of error propagation on the three components of SSIM. Let $Q_{\mathrm{att}}$ denote the quality attenuation of a given block $b$ due to error propagation:

$$Q_{\mathrm{att}}(b) = \mathrm{SSIM}(b,\tilde b)\,/\,\mathrm{SSIM}(b,\hat b)$$
(12)

To illustrate this, $Q_{\mathrm{att}}$ is measured with three different similarity metrics: 1) the luminance component of SSIM; 2) the contrast and structure components of SSIM; 3) the SSIM index. Figure 4 illustrates an example in which the quality attenuation is calculated by Equation 12 for each frame suffering from random transmission errors. As shown, the contrast and structure components exhibit changes similar to those of the SSIM index, whereas the impact of error propagation on the luminance component is limited.

Figure 4. The quality attenuation due to transmission errors.

The second observation concerns the relationship $f_p$ between the quality attenuation of block $b$ and the quality attenuation of its compensation block $p$. Here, $p$ denotes the compensation block of $b$, which may contain transmission errors; thus, $p$ can represent either the prediction block $r$ or the concealment block $c$ of $b$.

$$f_p = \frac{Q_{\mathrm{att}}(b|p)}{Q_{\mathrm{att}}(p)} = \frac{\mathrm{SSIM}(b,\tilde p)}{\mathrm{SSIM}(b,\hat p)} \Big/ \frac{\mathrm{SSIM}(p,\tilde p)}{\mathrm{SSIM}(p,\hat p)}$$
(13)

Usually, the quality attenuation of block $b$ correlates with the quality attenuation of its compensation block $p$. In addition, the structural similarity between the current block and its compensation block may be another factor in the estimation of $f_p$. We therefore define $x_p$ as follows to explore the joint effect of quality attenuation and structural similarity on $f_p$:

$$x_p = Q_{\mathrm{att}}(p)\cdot\mathrm{SSIM}(b,\hat p)$$
(14)

The simulations are carried out on the same four sequences as in Section 3.1. Each sequence is coded with four different QPs (15, 25, 35, and 45), as one I frame followed by all inter frames (IPPP). To cover various error propagation conditions, each block is tested with random transmission errors propagated from the prediction block and the concealment block, respectively. Note that the prediction residuals of block $b$ are not included in this observation.

Figure 5a displays the simulation results. The mean of each test frame is recorded as one blue sample, and the fitted curve of $f_p(x_p)$ is shown as the red line. The results show that the quality attenuation of a block, in terms of SSIM, is related to the quality attenuation of its compensation block and to the structural similarity. Moreover, they demonstrate that less quality attenuation of the compensation block, or less structural similarity between the current block and the compensation block, leads to less quality attenuation of the current block.
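As an illustration of this curve-fitting step, the sketch below fits a low-order polynomial to hypothetical per-frame means; the paper does not give the closed form of the fitted curve, so both the sample values and the polynomial order here are assumptions.

import numpy as np

# Hypothetical (x_p, f_p) per-frame means standing in for the blue samples
# of Figure 5a; the real values come from the four training sequences.
x_p_samples = np.array([0.20, 0.35, 0.50, 0.65, 0.80, 0.90])
f_p_samples = np.array([0.30, 0.45, 0.58, 0.70, 0.83, 0.91])

f_p_curve = np.poly1d(np.polyfit(x_p_samples, f_p_samples, deg=2))
print(f_p_curve(0.7))  # evaluate the fitted relationship at x_p = 0.7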

Figure 5. Quality attenuation due to error propagation. (a) The relationship between f_p and x_p; (b) the relationship between f_e and x_e.

The third observation concerns the impact of the prediction residual on the decoded quality. Let $Q_{\mathrm{enh}}$ denote the quality enhancement of a given block $b$ due to its prediction residual $e$, and let $f_e$ represent the relationship between the quality attenuation of block $b$ with and without the prediction residual:

$$Q_{\mathrm{enh}}(b|p,e) = \mathrm{SSIM}(b,p+e)\,/\,\mathrm{SSIM}(b,p)$$
(15)
$$f_e = \frac{Q_{\mathrm{att}}(b|p,e)}{Q_{\mathrm{att}}(b|p)} = \frac{\mathrm{SSIM}(b,\tilde p+\hat e)}{\mathrm{SSIM}(b,\hat p+\hat e)} \Big/ \frac{\mathrm{SSIM}(b,\tilde p)}{\mathrm{SSIM}(b,\hat p)}$$
(16)

In this observation, the quality attenuation of block $b$ may be linked with the quality attenuation of its compensation block $p$ and the quality enhancement due to its prediction residual $e$. We define $x_e$ as follows to explore the effect of quality attenuation and quality enhancement on $f_e$:

$$x_e = Q_{\mathrm{att}}(b|p)\,/\,Q_{\mathrm{enh}}(b|\hat p,\hat e)$$
(17)

The simulation set-up is the same as in the second observation, except that each block, including its prediction residuals, is tested with random transmission errors propagated from its prediction blocks. Figure 5b shows the simulation results. The mean of each test frame is recorded as one blue sample, and the fitted curve of $f_e(x_e)$ is shown as the red line. The results show that a larger prediction residual leads to better decoded quality of the current block, so the influence of error propagation from the prediction blocks becomes smaller.

According to Equations 13 and 16, effective approximations of $\mathrm{SSIM}(b,\tilde r+\hat e)$ and $\mathrm{SSIM}(b,\tilde c)$ can be developed as

$$\mathrm{SSIM}(b,\tilde r+\hat e) \approx f_e(x_e)\,\mathrm{SSIM}(b,\hat r+\hat e)\,\frac{\mathrm{SSIM}(b,\tilde r)}{\mathrm{SSIM}(b,\hat r)} \approx f_e(x_e)\,f_p(x_r)\,\mathrm{SSIM}(b,\hat r+\hat e)\,\underbrace{\frac{\mathrm{SSIM}(r,\tilde r)}{\mathrm{SSIM}(r,\hat r)}}_{Q_{\mathrm{att}}(r)} = Q_{\mathrm{att}}(r)\,f_e(x_e)\,f_p(x_r)\,\mathrm{SSIM}(b,\hat r+\hat e)$$
(18)
$$\mathrm{SSIM}(b,\tilde c) \approx f_p(x_c)\,\mathrm{SSIM}(b,\hat c)\,\underbrace{\frac{\mathrm{SSIM}(c,\tilde c)}{\mathrm{SSIM}(c,\hat c)}}_{Q_{\mathrm{att}}(c)} = Q_{\mathrm{att}}(c)\,f_p(x_c)\,\mathrm{SSIM}(b,\hat c)$$
(19)

where $\mathrm{SSIM}(r,\tilde r)$ and $\mathrm{SSIM}(c,\tilde c)$ capture the end-to-end quality of the prediction block and the concealment block, respectively. The approximations in these equations correspond to the estimation of $f_p$ and $f_e$.

Based on Equations 18 and 19, the error propagation factors in Equations 5 and 6 can be obtained by

$$\varphi_r = Q_{\mathrm{att}}(r)\,f_p(x_r)\,f_e(x_e)$$
(20)
$$\varphi_c = Q_{\mathrm{att}}(c)\,f_p(x_c)$$
(21)
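Putting the pieces together, a minimal sketch (ours) of Equations 18 to 21 as they would be evaluated at the encoder:

def propagation_factors(q_att_r, q_att_c, f_p_xr, f_p_xc, f_e_xe):
    # Equations 20 and 21: error propagation factors.
    phi_r = q_att_r * f_p_xr * f_e_xe
    phi_c = q_att_c * f_p_xc
    return phi_r, phi_c

def decoder_side_quality(ssim_pred_enc, ssim_conc_enc, phi_r, phi_c):
    # Equations 18 and 19: decoder-side SSIM approximated by scaling
    # the encoder-side SSIM with the propagation factors.
    return phi_r * ssim_pred_enc, phi_c * ssim_conc_enc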

To demonstrate the accuracy of the proposed error propagation models, Table 2 shows the average MAD between the actual and estimated SSIM for the four test sequences. It indicates that the video quality at the decoder, in terms of SSIM, can be approximated at the encoder by the fitted models (20) and (21).

Table 2 MAD between actual and estimated SSIM

4 Error-resilient video coding

It is widely recognized that intra-update is an effective approach for error-resilient video coding because decoding an intra-coded block does not require information from previous frames. To evaluate the performance of our proposed model, we incorporate the proposed SSIM-based end-to-end distortion model into the mode selection to improve the RD performance over packet-switched networks. Thus, the optimization problem in Equation 1 can be converted into a mode selection problem between intra-coding and inter-coding as follows:

$$\min_{\mathrm{mode}} J = \min_{\mathrm{mode}}\left\{ D_{\mathrm{SSIM}}(\mathrm{mode}\mid\rho,QP) + \lambda_{\mathrm{SSIM}}\,R(\mathrm{mode}\mid QP) \right\}$$
(22)

where $D_{\mathrm{SSIM}}$ and $R$ denote the end-to-end distortion and bit rate of the current coding block, ρ is the packet loss ratio, mode denotes the coding mode, and QP is the quantization parameter, which is determined by the target bit rate. According to [8, 28, 29], the Lagrange multiplier $\lambda_{\mathrm{SSIM}}$ is determined as follows:

$$\lambda_{\mathrm{SSIM}} = (1-\rho)\,\overline{D_{\mathrm{SSIM}}}\,f(R_T)$$
(23)

where $\overline{D_{\mathrm{SSIM}}}$ denotes the average distortion of the previous coding units, and $f(R_T)$ is an experimental look-up function [20] that is inversely proportional to the target bit rate $R_T$.
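A minimal sketch (ours) of the resulting mode decision; the per-mode distortion and rate values in the example are hypothetical and would come from the models above and the entropy coder:

def lagrange_multiplier(rho, avg_d_ssim, f_rt):
    # Equation 23: lambda_SSIM from the average distortion of previous
    # coding units and the rate-dependent factor f(R_T).
    return (1 - rho) * avg_d_ssim * f_rt

def select_mode(candidates, lambda_ssim):
    # Equation 22: choose the mode minimizing J = D_SSIM + lambda * R.
    # candidates: iterable of (mode, d_ssim, rate_bits) tuples.
    return min(candidates, key=lambda m: m[1] + lambda_ssim * m[2])

lam = lagrange_multiplier(rho=0.1, avg_d_ssim=0.02, f_rt=1.5e-3)
best = select_mode([("intra", 0.020, 1800), ("inter", 0.028, 600)], lam)
print(best[0])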

5 Experimental results

5.1 Evaluation of end-to-end distortion model

To validate the effectiveness of our proposed models, the end-to-end distortion models proposed in [16] and [19] are used for comparison. The SSIM-based estimation model in [17] cannot estimate the quality of a block under different coding modes, so it is not included in the comparison.

We performed the simulations with ten LIVE sequences [21, 22]. The first 100 frames of each sequence are encoded with four different QPs: 24, 28, 32, and 36, respectively. Random packet losses (10% and 20%) are applied. Each experiment is repeated 200 times and the results are averaged. Table 3 shows the average MAD between the actual and estimated end-to-end distortion in terms of SSIM. Our proposed model achieves better performance for most sequences.

Table 3 MAD between actual and estimated end-to-end distortion

5.2 Evaluation of RD performance

To validate the RD performance of error-resilient video coding, the MSE-based error-resilient video coding scheme (MSE-ER) and the SSIM-based error-resilient video coding scheme (SSIM-ER) in [19] are used as comparison schemes. For MSE-ER, the end-to-end distortion is estimated by the ROPE model [3], which is well studied and regarded as an advanced MSE-based distortion model, and the Lagrange multiplier is calculated by the model presented in [8].

We evaluate the performance on the JM 15.1 platform [30], in which the SSIM index is adopted as the quality metric for optimization. Five CIF sequences [27], five 640 × 360 sequences [27], and ten LIVE sequences [21, 22] are tested in the experiments. The first frame is coded as an I frame and the rest as P frames, with rate control turned on. Table 4 lists the target bit rates for the test sequences. Corresponding to the four target bit rates, the initial QP is set to 24, 28, 32, and 36 for the first I frame and P frame. Frames are partitioned into one or more slices (each containing no more than 1,200 bytes), and each slice is packed into one packet for transmission. The test sequences are encoded with 5%, 10%, and 20% random packet loss ratios, respectively, and four target bit rates are tested for each loss ratio. Each experiment is repeated 200 times, and the averaged results are reported as the final result.
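For reproducibility, a minimal sketch of the random-loss protocol (our illustration; the packet count is hypothetical):

import numpy as np

def simulate_packet_losses(num_packets, rho, trials=200, seed=0):
    # Draw independent random losses per packet, repeated over `trials`
    # runs as in the experimental protocol, and return the loss masks.
    rng = np.random.default_rng(seed)
    return rng.random((trials, num_packets)) < rho

masks = simulate_packet_losses(num_packets=300, rho=0.10)
print(masks.mean())  # empirical loss ratio, close to 0.10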

Table 4 Detailed information of the test sequences

Figure 6 illustrates the Rate-SSIM performance comparison for four test sequences. Moreover, taking the MSE-ER scheme as the baseline, we calculate the average SSIM gains over all bit rates and packet loss ratios, which are tabulated in Table 5.

Figure 6. Comparison of Rate-SSIM performance. (a) ‘Ducks_take_off’ with 5% packet loss; (b) ‘Football’ with 10% packet loss; (c) ‘Park run’ with 10% packet loss; (d) ‘Shields’ with 20% packet loss.

Table 5 Simulation results of SSIM gain with different packet loss ratios and bit rates

It can be seen that the proposed model yields consistent gains over MSE-ER for all sequences except ‘Pedestrian area’. Our proposed scheme achieves an average SSIM gain of 0.0218, or equivalently a bit rate saving of 15.7%. Compared to SSIM-ER [19], our proposed scheme performs better for most sequences and obtains an average gain of 0.0068. For some sequences, such as ‘Mobile calendar’ and ‘Pedestrian area’, our proposed scheme does not achieve the best performance, but the quality of the two SSIM-ER schemes is similar.

5.3 Evaluation of subjective quality

Finally, we show the visual quality comparison of reconstructed images produced by the different error-resilient video coding schemes. Figure 7 compares the subjective quality of the 25th frame of ‘Stefan’ encoded at 1.7 Mbps with 10% packet loss. Figure 8 compares the visual quality of the 38th frame of ‘In_to_tree’ encoded at 1 Mbps with 20% packet loss. Figure 9 shows the visual quality of the 29th frame of ‘Station’ encoded at 0.85 Mbps with 5% packet loss.

Figure 7. Subjective quality comparison of one CIF sequence. (a) Original frame of ‘Stefan’; (b) ‘Stefan’ by MSE-ER (SSIM 0.913); (c) ‘Stefan’ by SSIM-ER [19] (SSIM 0.917); (d) ‘Stefan’ by the proposed scheme (SSIM 0.929).

Figure 8. Subjective quality comparison of one 640 × 360 sequence. (a) Original frame of ‘In_to_tree’; (b) ‘In_to_tree’ by MSE-ER (SSIM 0.833); (c) ‘In_to_tree’ by SSIM-ER [19] (SSIM 0.860); (d) ‘In_to_tree’ by the proposed scheme (SSIM 0.873).

Figure 9. Subjective quality comparison of one LIVE sequence. (a) Original frame of ‘Station’; (b) ‘Station’ by MSE-ER (SSIM 0.826); (c) ‘Station’ by SSIM-ER [19] (SSIM 0.851); (d) ‘Station’ by the proposed scheme (SSIM 0.869).

At a similar bit rate, the reconstructed images from the SSIM-based error-resilient video coding provide better visual quality because more image details are protected from transmission errors, whereas the reconstructed images from the conventional MSE-based error-resilient video coding suffer from larger perceptual distortion. Compared to SSIM-ER [19], our proposed scheme obtains similar or better visual quality.

5.4 Evaluation of coding complexity

Our proposed SSIM-based error-resilient video coding scheme improves the RD performance for lossy transmission over packet-switched networks. However, the computational complexity of the encoder is increased due to the SSIM-based distortion calculation and mode selection.

We compare the coding efficiency at different bit rates and packet loss ratios. Table 6 shows the average encoding time ratios of SSIM-ER [19] and the proposed scheme relative to MSE-ER, respectively. The experiments are performed on a laptop with a 3.4 GHz Intel Core i7-3770 CPU and 4 GB of memory running Microsoft Windows 7 Professional. Each experiment is repeated 100 times and the results are averaged.

Table 6 Average encoding time ratios of SSIM-ER [19] and the proposed scheme relative to MSE-ER

Compared to MSE-ER, the average encoding times of SSIM-ER [19] and the proposed scheme increase by 3.65% and 5.58%, respectively. In addition, the increase in encoding time varies across sequences, as can be seen in Table 6. This is because the computational complexity is also affected by the characteristics of the video content and the results of the mode selection; the SSIM-based schemes may take more time to code image details, as in ‘Sunflower’ and ‘Mobile’.

6 Conclusions

In this paper, we propose an SSIM-based end-to-end distortion model for H.264/AVC video coding over packet-switched networks. The model estimates the content-dependent perceptual distortion due to quantization, error concealment, and error propagation. We integrate the proposed end-to-end distortion model into the error-resilient video coding framework to optimally select the coding mode. Simulation results show that the proposed scheme outperforms state-of-the-art schemes in terms of SSIM.

References

1. Wenger S: H.264/AVC over IP. IEEE Transact Circ Syst Video Technol 2003, 13: 645-656. 10.1109/TCSVT.2003.814966

2. He ZH, Cai JF, Chen CW: Joint source channel rate-distortion analysis for adaptive mode selection and rate control in wireless video coding. IEEE Transact Circ Syst Video Technol 2002, 12: 511-523. 10.1109/TCSVT.2002.800313

3. Zhang R, Regunathan SL, Rose K: Video coding with optimal inter/intra-mode switching for packet loss resilience. IEEE J Selected Areas Commun 2000, 18: 966-976.

4. Yang XK, Zhu C, Li ZG, Lin X, Feng GN, Wu S, Ling N: Unequal loss protection for robust transmission of motion compensated video over the Internet. Signal Process. Image Commun. 2003, 18: 157-167. 10.1016/S0923-5965(02)00128-5

5. Zhang CY, Yang H, Yu SY, Yang XK: GOP-level transmission distortion modeling for mobile streaming video. Signal Process. Image Commun. 2008, 23: 116-126. 10.1016/j.image.2007.12.002

6. Wang Y, Wu ZY, Boyce JM: Modeling of transmission-loss-induced distortion in decoded video. IEEE Transact Circ Syst Video Technol 2006, 16: 716-732.

7. Yang H, Rose K: Advances in recursive per-pixel end-to-end distortion estimation for robust video coding in H.264/AVC. IEEE Transact Circ Syst Video Technol 2007, 17: 845-856.

8. Zhang Y, Gao W, Lu Y, Huang Q, Zhao D: Joint source-channel rate-distortion optimization for H.264 video coding over error-prone networks. IEEE Transact Multimed 2007, 9: 445-454.

9. Yang H, Rose K: Optimizing motion compensated prediction for error resilient video coding. IEEE Transact Image Process 2010, 19: 108-118.

10. Xiao JM, Tillo T, Lin CY, Zhao Y: Joint redundant motion vector and intra macroblock refreshment for video transmission. EURASIP J Image Video Process 2011, 12. doi:10.1186/1687-5281-2011-12

11. Zhang YX, Zhu C, Yap KH: A joint source-channel video coding scheme based on distributed source coding. IEEE Transact Multimed 2008, 10: 1648-1656.

12. Zhou Y, Hou CP, Xiang W, Wu F: Channel distortion modeling for multi-view video transmission over packet-switched networks. IEEE Transact Circ Syst Video Technol 2011, 21: 1679-1692.

13. Xiao JM, Tillo T, Lin CY, Zhao Y: Dynamic sub-GOP forward error correction code for real-time video applications. IEEE Transact Multimed 2012, 14: 1298-1308.

14. Xue Z, Loo KK, Cosmas J, Tun M, Yip PY: Error-resilient scheme for wavelet video coding using automatic ROI detection and Wyner-Ziv coding over packet erasure channel. IEEE Transact Broadcast 2010, 56: 481-493.

15. Dissanayake MB, Worrall S, Fernando WAC: Error resilience for multi-view video using redundant macroblock coding. In Proceedings of the IEEE International Conference on Industrial Information Systems (ICIIS). Kandy; 2011:472-476.

16. Wang YX, Zhang Y, Lu R, Cosman PC: SSIM-based end-to-end distortion modeling for H.264 video coding. In Proceedings of the Pacific-Rim Conference on Multimedia (PCM). Singapore; 2012:117-128.

17. Kwon YJ, Lee J-S: Parametric estimation of structural similarity degradation for video transmission over error-prone networks. Electron. Lett. 2013, 49: 1147-1148. 10.1049/el.2013.1951

18. Zhang L, Peng Q, Wu X: SSIM-based error-resilient video coding over packet-switched networks. In Proceedings of the Pacific-Rim Conference on Multimedia (PCM). Singapore; 2012:263-272.

19. Zhao PH, Liu YW, Liu JX, Li S, Yao RX: SSIM-based error-resilient rate-distortion optimization of H.264/AVC video coding for wireless streaming. Signal Process. Image Commun. 2014, 29: 303-315. 10.1016/j.image.2013.12.004

20. Zhang L, Peng Q, Wu X: SSIM-based end-to-end distortion model for error resilient video coding over packet-switched networks. In Proceedings of the International Conference on Multimedia Modeling (MMM). Huangshan; 2013:307-317.

21. Seshadrinathan K, Soundararajan R, Bovik AC, Cormack LK: Study of subjective and objective quality assessment of video. IEEE Transact Image Process 2010, 19: 1427-1441.

22. LIVE Video Quality Database. 2012. http://www.utexas.edu/ece/research/live/vqdatabase/

23. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP: Image quality assessment: from error visibility to structural similarity. IEEE Transact Image Process 2004, 13: 600-612. 10.1109/TIP.2003.819861

24. Lin WS, Kuo CCJ: Perceptual visual quality metrics: a survey. J Visual Commun Image Represent 2011, 22: 297-312. 10.1016/j.jvcir.2011.01.005

25. Lam E, Goodman J: A mathematical analysis of the DCT coefficient distributions for images. IEEE Transact Image Process 2000, 9: 1661-1666. 10.1109/83.869177

26. Yang TW, Zhu C, Fan XJ, Peng Q: Source distortion temporal propagation model for motion compensated video coding optimization. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME). Melbourne; 2012:85-90.

27. Xiph.org Video Test Media. 2010. http://media.xiph.org/video/derf/

28. Ou TS, Huang YH, Chen HH: SSIM-based perceptual rate control for video coding. IEEE Transact Circ Syst Video Technol 2011, 21: 682-691.

29. Wiegand T, Girod B: Lagrange multiplier selection in hybrid video coder control. In Proceedings of the IEEE International Conference on Image Processing (ICIP). Thessaloniki; 2001:542-545.

30. JVT Reference Software. 2011. http://iphome.hhi.de/suehring/tml/


Acknowledgements

The work described in this paper was supported by the National Natural Science Foundation of China (No. 60972111, 61036008, 61071184, 61373121), Research Funds for the Doctoral Program of Higher Education of China (No. 20100184120009, 20120184110001), Program for Sichuan Provincial Science Fund for Distinguished Young Scholars (No. 2012JQ0029, 13QNJJ0149), the Fundamental Research Funds for the Central Universities (Project no. SWJTU09CX032, SWJTU10CX08, SWJTU11ZT08), and Open Project Program of the National Laboratory of Pattern Recognition (NLPR).

Author information

Correspondence to Lei Zhang.

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Peng, Q., Zhang, L., Wu, X. et al. Modeling of SSIM-based end-to-end distortion for error-resilient video coding. J Image Video Proc 2014, 45 (2014). https://doi.org/10.1186/1687-5281-2014-45