Open Access

Error concealment algorithm using inter-view correlation for multi-view video

  • Yuan-Kai Kuan1,
  • Gwo-Long Li2,
  • Mei-Juan Chen1,
  • Kuang-Han Tai1 and
  • Pin-Cheng Huang1
EURASIP Journal on Image and Video Processing 2014, 2014:38

https://doi.org/10.1186/1687-5281-2014-38

Received: 11 March 2014

Accepted: 13 June 2014

Published: 2 August 2014

Abstract

This paper proposes an error concealment algorithm for whole-frame loss in multi-view video decoding. In our proposal, the relationship between motion vectors and disparity vectors is exploited first. Based on the parallelogram-like motion relationship, the motion vectors of erroneous frames can be derived indirectly by projecting the disparity vectors from the counterpart view. In addition, to further improve the concealment results, a joint sum of absolute differences (SAD) minimization approach is proposed to find the best-matching block for concealing the current error block by jointly considering motion vectors and disparity vectors. Experimental results show that our proposed algorithm provides better video quality than previous work and reduces error propagation.

Keywords

Error concealment; Multi-view; MVC; Motion vector; Disparity vector

1. Introduction

As multimedia technology has advanced in recent years, applications such as three-dimensional (3D) television and free viewpoint video (FVV) have become more attractive. To support such applications, the multi-view video coding (MVC) standard has been proposed [1, 2]. It builds on the motion-compensated prediction (MCP) technology adopted in H.264/AVC [3, 4] and incorporates disparity-compensated prediction (DCP), as shown in Figure 1, to eliminate inter-view redundancy.
Figure 1

The prediction structure of two views.

In error-prone network environments, packet errors or packet loss may occur frequently due to unpredictable noise sources, which degrades the received video quality as shown in Figure 2. Therefore, error recovery mechanisms have become an important research issue. To deal with this problem in multi-view applications, many studies have been proposed. In general, error recovery can be undertaken by two approaches: error resilience [5–9] and error concealment [10–18]. For multi-view error concealment, study [17] uses the intra-view difference, inter-view correlation, and inter-view disparity-based frame difference projections on the neighboring views to conceal erroneous frames. However, it requires complex computations for temporal change detection, disparity estimation, and frame difference projection, which makes real-time application difficult. Study [18] compares the sum of absolute differences (SAD) between the previous two frames with the SAD between adjacent views of the previous frame to achieve error concealment. However, it does not exploit the useful information carried by disparity vectors, even though the authors of [17] show that disparity vectors can significantly improve error concealment results.
Figure 2

Illustration of error propagation in a multi-view application.

To deal with the error problem in multi-view video coding, we propose a whole-frame error concealment algorithm that applies a predictive compensation approach and exploits inter-view correlation to conceal erroneous frames of the right view. Using the disparity vectors (DVs) of the previous frame as reference prediction DVs, the motion vectors (MVs) inside the blocks referred to by these DVs are collected as candidates for the error concealment process. Finally, once the candidate MVs have been collected, the candidate MV with the smallest joint SAD is chosen as the best MV to conceal the error block.

The rest of this paper is organized as follows. In Section 2, the proposed algorithm is described in detail. Section 3 shows some simulation results to demonstrate the efficiency of our proposed error concealment algorithm. The conclusion is provided in Section 4.

2. Proposed algorithm

Single-view error concealment algorithms only consider information from the spatial and temporal domains. In multi-view video coding, however, information from the other coded views is also available, so exploiting the relationship between views can achieve better error concealment results than single-view approaches. Therefore, we first examine the relationship between views and then propose our error concealment algorithm based on this observation.

2.1 Observation of multi-view characteristics

To create multi-view video sources, the cameras are usually placed along a horizontal line and capture the scene at the same time. In this case, the motion vectors in different views are very similar to each other, because the views capture the same target along the time axis. When observing the target along the view axis, however, the distance between the cameras causes a displacement of the objects' positions in the scene. Therefore, inter-view disparity vectors are usually used to describe the relationship of objects between views. Figure 3 gives an example illustrating the movement between frames and views.
Figure 3

Illustration of a real multi-view sequence.

By examining multi-view sequences, we can observe the following properties. First, for quiescent regions with almost zero motion, the correlation between frames within a single view is higher than that between views. Second, for high-motion regions, the correlation between views is much higher than that between frames. Based on these observations, study [17] proposes a parallelogram-like motion relationship to describe the correlation between motion vectors and disparity vectors, as shown in Figure 4. From this figure, we can find that if an object moves from frame (f-1) to frame (f) in one view, we can observe the same movement in the other view. Similarly, if we obtain a certain disparity vector in frame (f-1), we can expect a similar disparity vector in frame (f).
Figure 4

The in-view and cross-view parallelogram-like motion relationship ($DV_{R,f-1} \approx DV_{R,f}$, $MV_{R,f} \approx MV_{L,f}$).
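As a concrete illustration, the following Python sketch (ours, not the authors' implementation; the function name and the (dx, dy) vector layout are assumptions) encodes the two approximations behind Figure 4:

```python
# Minimal sketch of the parallelogram-like relationship in Figure 4.
# Vectors are (dx, dy) tuples in pixels.

def predict_from_other_view(dv_prev, mv_left):
    """Apply the two parallelogram approximations:
    DV_{R,f} ~ DV_{R,f-1}  (disparity changes slowly over one frame)
    MV_{R,f} ~ MV_{L,f}    (both views observe the same object motion)
    to estimate the lost right-view vectors."""
    dv_curr_estimate = dv_prev     # DV_{R,f} ~ DV_{R,f-1}
    mv_right_estimate = mv_left    # MV_{R,f} ~ MV_{L,f}
    return dv_curr_estimate, mv_right_estimate

# Example: the object sits 12 pixels to the left in the right view and moved
# (4, 1) pixels in the left view; the lost right-view MV is estimated as (4, 1).
print(predict_from_other_view(dv_prev=(-12, 0), mv_left=(4, 1)))
```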

2.2 Proposed error concealment algorithm

From the above sub-section, we observe that the motion vectors between views and the disparity vectors between frames have a high degree of similarity and a close relationship. The proposed error concealment algorithm is based on this observation. Figure 5 shows the flowchart of the proposed algorithm. First, a DV set is constructed according to the extended window (EW). Once the EW has been decided, we check whether there is any disparity vector within the EW. If there is, the proposed DV-based error concealment algorithm is applied to deal with the error recovery problem. Otherwise, the proposed MV-based error concealment algorithm is used. The details of the proposed algorithm are described as follows:
Figure 5

Flowchart of the proposed error concealment algorithm.

1. EW construction

In the proposed algorithm, a block size of B × B is adopted to conceal erroneous frames. Using 16 or 8 for B obtains better concealment results, since selecting 4 for B would result in a broken frame. After deciding on the block size, we extend B pixels on all sides of the corresponding block in the previous frame to form a 3B × 3B extended window, as shown in Figure 6. The derivation of the EW can be expressed as follows:
Figure 6

Schematic diagram of the extended window.

$$\mathrm{EW} = \left\{ DV_{R,f-1}^{i} \;\middle|\; 0 \le i < N;\ DV_{R,f-1}^{i} \text{ covered by the } 3B \times 3B \text{ window} \right\}$$
(1)
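A minimal sketch of how Equation 1 could be realized, assuming the decoder keeps one DV per B × B block of the previous right-view frame in a dictionary (the `dv_map` layout and function name are ours, not the paper's):

```python
# Sketch of Equation 1: gather the DVs of F_{R,f-1} whose blocks fall
# inside the 3B x 3B extended window around the erroneous block (i, j).

def build_ew(dv_map, i, j):
    """dv_map: {(bx, by): (dvx, dvy)} with one entry per DCP-coded
    B x B block of the previous right-view frame. The EW is the 3 x 3
    block neighborhood centered on block (i, j), i.e. a 3B x 3B area."""
    ew = []
    for bx in range(i - 1, i + 2):
        for by in range(j - 1, j + 2):
            if (bx, by) in dv_map:        # MCP-coded blocks carry no DV
                ew.append(((bx, by), dv_map[(bx, by)]))
    return ew
```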
2. DV-based error concealment

If the EW contains any DV, we calculate the area covered by each disparity vector in the EW and check whether any covered area exceeds a predefined threshold TH. In our proposed algorithm, the default threshold is set to half of the EW area. If all of the areas covered by DVs in the EW are less than TH, the algorithm switches to MV-based error concealment. Otherwise, the DV with the largest covered area in the EW is selected to conceal the error block.
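The decision rule above might be sketched as follows; the input format (DVs paired with their covered areas inside the EW) and the function name are our assumptions:

```python
# Sketch of the DV-based concealment decision: keep the dominant DV only
# if its covered area reaches the threshold (half the EW area by default).

def dv_based_ec(ew_dvs, ew_area, threshold_ratio=0.5):
    """ew_dvs: list of (dv, covered_area) pairs inside the EW.
    Returns the winning DV, or None to signal the MV-based fallback."""
    if not ew_dvs:
        return None                            # empty EW: MV-based EC
    best_dv, best_area = max(ew_dvs, key=lambda e: e[1])
    if best_area < threshold_ratio * ew_area:
        return None                            # all areas below TH: switch
    return best_dv                             # conceal with the dominant DV
```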
3. MV-based error concealment

(a) Reconstruction of the new extended window

The proposed MV-based error concealment algorithm is executed under two conditions: the first is switching from DV-based error concealment, and the second is an empty EW. Based on the condition, a new extended window (NEW) is constructed as follows:
$$\mathrm{NEW} = \begin{cases} \left\{ DV_{R,f-1}^{i} \;\middle|\; 0 \le i < N;\ DV_{R,f-1}^{i} \text{ covered by } W \times H \right\}, & \text{if EW is empty} \\ \mathrm{EW}, & \text{if switched from DV-based EC} \end{cases}$$
(2)
where W and H denote the width and height, respectively, of the entire frame. In other words, if the MV-based error concealment process is triggered by an empty EW, the NEW is constructed from all DVs in the entire frame. Otherwise, the NEW is the same as the EW.
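A sketch of Equation 2's fallback logic, under the same assumed data layout as the EW sketch above:

```python
# Sketch of Equation 2: fall back to every DV of the previous right-view
# frame when the EW is empty; otherwise reuse the EW unchanged.

def build_new(ew, dv_map):
    """ew: DV set from Equation 1 (possibly empty).
    dv_map: all DVs of F_{R,f-1}, keyed by block coordinate."""
    if not ew:
        return list(dv_map.items())   # empty EW: all DVs in the W x H frame
    return ew                         # switched from DV-based EC: NEW = EW
```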
(b) MV derivation process

Once the NEW has been constructed, the DVs inside the NEW are used to select motion vectors from the left view. To derive the motion vectors corresponding to all DVs in the NEW, each DV is projected onto the left view with a B × B window called the covered window (CW), as shown in Figure 7. After the DV projection, the CW may cover more than one motion vector, as shown in Figure 8. Therefore, a simple mechanism is adopted: the motion vector with the largest area covered by the CW is selected as the final motion vector in the derivation process. The motion vector is selected as follows:
Figure 7

Relationship between DVs and CWs.

Figure 8

Illustration of motion vectors covered by CW.

$$MV_{L,f}^{i} = \operatorname*{arg\,max}_{0 \le k < N} \mathrm{Area}\!\left( MV^{k} \right),$$
(3)
where $\mathrm{Area}(\cdot)$ is the function that calculates the area of the specific target covered by the CW.
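The CW projection and largest-coverage selection of Equation 3 could look like the sketch below; the rectangle-overlap helper, the `mv_blocks` layout, and the sign convention of the DV shift are our assumptions:

```python
# Sketch of Equation 3: project a B x B covered window (CW) onto the left
# view at the DV-shifted position and pick the MV with the largest overlap.

def overlap(ax0, ay0, ax1, ay1, bx0, by0, bx1, by1):
    """Pixel area of the intersection of two axis-aligned rectangles."""
    return max(0, min(ax1, bx1) - max(ax0, bx0)) * \
           max(0, min(ay1, by1) - max(ay0, by0))

def derive_mv(dv, i, j, B, mv_blocks):
    """mv_blocks: {(bx, by): (mvx, mvy)} per B x B block of F_{L,f}.
    Place the CW at the erroneous block's position shifted by dv, then
    return the MV whose block has the largest area covered by the CW."""
    cx, cy = i * B + dv[0], j * B + dv[1]     # CW top-left in the left view
    best_mv, best_area = None, 0
    for (bx, by), mv in mv_blocks.items():
        a = overlap(cx, cy, cx + B, cy + B,
                    bx * B, by * B, (bx + 1) * B, (by + 1) * B)
        if a > best_area:
            best_mv, best_area = mv, a
    return best_mv
```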
(c) SAD calculation according to the selected MV

Based on the parallelogram-like motion relationship between inter-frame and inter-view correlation shown in Figure 4, we can observe that $DV_{R,f-1}^{n}$ will be very similar to $DV_{R,f}^{n}$, and likewise $MV_{L,f}^{n}$ will be very similar to $MV_{R,f}^{n}$. Therefore, when an error occurs in the f-th frame of the right view, the MV obtained from the corresponding left-view block, shifted by the DV, will be very similar to the original MV of the erroneous frame, provided the corresponding DV in the previous frame is correct. The SADs between B1 and B2, as shown in Figure 9, are therefore calculated for all MVs covered by the CW to determine the block for concealing the current erroneous block. However, a block may be shifted by a wrong DV while the luminance difference between the blocks pointed to by the wrong MVs remains unnoticeable. To solve this problem, we further consider the SADs between the left and right views in the previous frame ($F_{L,f-1}$). The step-by-step block selection procedure for computing the SADs is described below.
Figure 9

Illustration of block relationship for computing SAD.

Step 1: The disparity vector $DV_{R,f-1}^{n}$ of the erroneous block $B_c$ is selected and projected onto the left view to obtain the block B1 pointed to by $DV_{R,f-1}^{n}$.

Step 2: The motion vector $MV_{L,f}^{n}$ with the largest area covered by B1 is selected and projected onto the previous frame of the right view to obtain B2.

Step 3: The corresponding block B3, pointed to by $DV_{R,f-1}^{n}$ from B2, is used to calculate the SAD between B2 and B3.

Step 4: Finally, the motion vector with the minimum joint SAD is derived by the following equation to conceal the error block $B_c$:
$$\begin{aligned}
MV_{L,f}(i,j) = \operatorname*{arg\,min}_{MV_{L,f}^{n}} \sum_{a=0}^{B-1} \sum_{b=0}^{B-1} \Big[
 &\left| F_{R,f-1}\!\left( Bi + MV_{L,f,x}^{n} + a,\ Bj + MV_{L,f,y}^{n} + b \right) - F_{L,f}\!\left( Bi + DV_{R,f-1,x}^{n} + a,\ Bj + DV_{R,f-1,y}^{n} + b \right) \right| \\
+&\left| F_{R,f-1}\!\left( Bi + MV_{L,f,x}^{n} + a,\ Bj + MV_{L,f,y}^{n} + b \right) - F_{L,f-1}\!\left( Bi + MV_{L,f,x}^{n} + DV_{R,f-1,x}^{n} + a,\ Bj + MV_{L,f,y}^{n} + DV_{R,f-1,y}^{n} + b \right) \right| \Big]
\end{aligned}$$
(4)

The notations of Equation 4 are listed as follows:

  • i and j, the horizontal and vertical indexes of the B × B block in a frame

  • a and b, the horizontal and vertical indexes of the pixel inside the block

  • $F_{R,f}$, the lost frame of the right view

  • $F_{R,f-1}$, the previous frame of the lost frame in the right view

  • $F_{L,f}$, the current frame of the left view

  • $F_{L,f-1}$, the previous frame of the lost frame in the left view

  • $DV_{R,f-1,x}^{n}$, the horizontal component of the n-th DV in the block of the previous frame of the right view

  • $DV_{R,f-1,y}^{n}$, the vertical component of the n-th DV in the block of the previous frame of the right view

  • $MV_{L,f,x}^{n}$, the horizontal component of the n-th MV in the block of the current frame of the left view

  • $MV_{L,f,y}^{n}$, the vertical component of the n-th MV in the block of the current frame of the left view

By jointly considering the SADs between views and between frames, the concealment results can be further improved.
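Under the notation above, the joint SAD of Equation 4 for one candidate (MV, DV) pair might be computed as in the sketch below; the numpy frame layout (grayscale arrays indexed [y, x]), all function names, and the omission of boundary clipping are our assumptions:

```python
# Sketch of Equation 4: SAD(B2, B1) + SAD(B2, B3), minimized over the
# candidate (MV, DV) pairs collected from the NEW (Steps 1 to 4 above).
import numpy as np

def joint_sad(F_R_prev, F_L_curr, F_L_prev, i, j, B, mv, dv):
    """B1 = F_{L,f} at the DV-shifted position, B2 = F_{R,f-1} at the
    MV-shifted position, B3 = F_{L,f-1} shifted by both MV and DV."""
    x, y = i * B, j * B
    b1 = F_L_curr[y + dv[1]:y + dv[1] + B, x + dv[0]:x + dv[0] + B]
    b2 = F_R_prev[y + mv[1]:y + mv[1] + B, x + mv[0]:x + mv[0] + B].astype(int)
    b3 = F_L_prev[y + mv[1] + dv[1]:y + mv[1] + dv[1] + B,
                  x + mv[0] + dv[0]:x + mv[0] + dv[0] + B]
    return np.abs(b2 - b1).sum() + np.abs(b2 - b3).sum()

def conceal_block(candidates, F_R_prev, F_L_curr, F_L_prev, i, j, B):
    """Pick the (mv, dv) pair minimizing the joint SAD; the winning MV is
    then used to copy the concealment block into the lost frame F_{R,f}."""
    return min(candidates, key=lambda c: joint_sad(F_R_prev, F_L_curr,
                                                   F_L_prev, i, j, B, *c))
```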

3. Simulation results

In this section, several simulation results are given to demonstrate the efficiency of our proposed MVC error concealment algorithm. The test sequences used for simulation are Ballroom (640 × 480), Exit (640 × 480), Flamenco (640 × 480), Race1 (640 × 480), AkkoKayo (640 × 480), and Vassar (640 × 480). In our simulation, we assume that only the right view suffers whole-frame errors while the left view is error-free. Study [18] is adopted for comparison, with some modifications to allow its algorithm to support whole-frame loss concealment. The simulation settings are summarized in Table 1, in which the packet loss rate (PLR) is simulated by randomly dropping a certain number of frames. For example, a 5% PLR is simulated by randomly dropping 5 frames out of 100 frames.
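Since the dropped-frame pattern drives all PLR results below, a small sketch of the stated procedure may help; this is our illustration only, and the seed and function name are hypothetical:

```python
# Sketch of the PLR simulation described above: for a given loss rate,
# drop the corresponding number of randomly chosen frames out of 100.
import random

def draw_lost_frames(total_frames=100, plr=0.05, seed=1):
    rng = random.Random(seed)             # fixed seed for repeatability
    n_lost = round(total_frames * plr)    # e.g. 5 frames at 5% PLR
    # Frame 0 is the only intra frame (Table 1), so only inter frames drop.
    return sorted(rng.sample(range(1, total_frames), n_lost))

print(draw_lost_frames())  # five distinct inter-frame indices in [1, 99]
```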
Table 1

Simulation parameters

  Parameter                 Value
  View                      2
  Reference software        JMVC 8.5
  Processor                 Intel Core i7-870, 2.93 GHz
  GOP structure             IPPP…
  Intra-refresh             Only the first frame
  Frame rate                25 fps
  Frame number              100
  Reference frame number    2
  Coding order              0 → 1
  QP                        32
  PLR                       5%, 10%, 15%, 20%

Tables 2 and 3 tabulate the peak signal-to-noise ratio (PSNR) comparison of our proposed algorithm with other methods under different packet loss rate conditions, for the entire frame and error frame only cases, respectively. Frame Copy (FC), Motion Copy (MC), and the algorithm of [18] are compared. In these tables, ΔPSNR is the PSNR of our proposal minus the PSNR of [18], and B is set to 8, which means the basic error concealing block size is 8. From these tables, we can observe that our proposed algorithm outperforms the other methods. Quantitatively, our proposed algorithm achieves about 4-dB PSNR improvement over [18] for the high-motion sequence Race1 under the 5% packet loss rate condition. For other sequences such as Exit and Vassar, however, the PSNR improvement is less significant. This can be explained as follows. In [18], the MB pixels at the same spatial position in the temporal and inter-view directions are evaluated; in other words, [18] does not take the motion of the frame into account. This mechanism can obtain good concealment results for low-motion sequences. Since our proposal takes both motion vectors and disparity vectors into account, it obtains better concealment results for high-motion sequences. On average, our proposed algorithm achieves 1.326- and 1.421-dB PSNR improvements over [18] for the entire frame and error frame only cases, respectively.
Table 2

PSNR comparison of our proposed algorithm with other methods for entire frames (B = 8)

  PLR   Method      Ballroom  Vassar   Race1    Exit     AkkoKayo  Flamenco  Average
  -     Error free  35.499    34.957   35.820   37.214   36.930    38.448    36.478
  5%    FC          29.077    34.396   22.635   33.617   28.069    29.673    29.578
        MC          29.323    34.494   23.885   33.721   29.610    29.818    30.142
        [18]        29.292    34.388   22.780   33.614   28.184    29.743    29.667
        Proposed    30.241    34.541   26.926   33.608   31.232    30.239    31.131
        ΔPSNR       0.949     0.153    4.146    -0.006   3.048     0.496     1.464
  10%   FC          25.463    33.525   19.551   30.748   23.431    26.445    26.527
        MC          25.445    33.579   21.189   31.083   26.098    26.325    27.287
        [18]        25.470    33.583   20.040   30.970   23.688    26.524    26.713
        Proposed    26.063    33.721   23.877   30.972   26.293    26.608    27.922
        ΔPSNR       0.593     0.138    3.837    0.002    2.605     0.084     1.210
  15%   FC          23.466    32.394   17.479   29.056   20.595    24.308    24.550
        MC          23.481    32.634   19.937   29.506   22.249    24.447    25.376
        [18]        23.666    32.480   18.336   29.114   20.762    24.557    24.819
        Proposed    24.399    33.114   21.557   28.902   23.547    24.917    26.073
        ΔPSNR       0.733     0.634    3.221    -0.212   2.785     0.360     1.254
  20%   FC          22.117    31.900   16.568   27.301   19.939    22.948    23.462
        MC          22.220    31.747   19.048   28.247   22.089    23.095    24.408
        [18]        22.334    31.964   17.693   27.213   20.726    22.858    23.798
        Proposed    23.439    32.726   20.698   27.811   22.921    23.446    25.174
        ΔPSNR       1.105     0.762    3.005    0.598    2.195     0.588     1.376

Table 3

PSNR comparison of our proposed algorithm with [18] for error frames only (B = 8)

  PLR   Method      Ballroom  Vassar   Race1    Exit     AkkoKayo  Flamenco  Average
  5%    Error free  35.520    34.991   35.792   37.275   36.906    38.505    36.498
        [18]        22.114    33.049   16.732   30.809   21.823    25.485    25.002
        Proposed    22.659    33.325   21.309   30.585   25.286    26.378    26.590
        ΔPSNR       0.545     0.276    4.577    -0.224   3.463     0.893     1.588
  10%   Error free  35.519    34.970   35.772   37.225   36.953    38.480    36.487
        [18]        21.479    32.296   16.590   29.216   20.429    24.088    24.016
        Proposed    22.185    32.582   20.847   29.131   23.299    24.456    25.417
        ΔPSNR       0.706     0.286    4.257    -0.085   2.870     0.368     1.400
  15%   Error free  35.513    34.973   35.762   37.224   36.972    38.439    36.481
        [18]        20.854    31.360   16.261   27.885   18.848    22.928    23.023
        Proposed    21.593    32.327   19.747   27.674   21.785    23.449    24.429
        ΔPSNR       0.739     0.967    3.486    -0.211   2.937     0.521     1.407
  20%   Error free  35.513    34.965   35.790   37.219   36.950    38.462    36.483
        [18]        20.134    32.120   15.894   26.341   19.026    21.567    22.514
        Proposed    21.143    31.973   19.151   26.897   21.427    22.235    23.804
        ΔPSNR       1.009     -0.147   3.257    0.556    2.401     0.668     1.291

Tables 4 and 5 list the PSNR comparison for the case where B is 16. From these tables, we can observe that even though the basic error concealing block size is extended to 16, our proposed algorithm still achieves PSNR improvement over [18]. On average, our proposed algorithm achieves 1.346- and 1.784-dB PSNR improvements over [18] for the entire frame and error frame only cases, respectively. From Tables 2, 3, 4, and 5, we can observe that the PSNR improvement for smaller B is better than that for larger B. This can be explained as follows. In general, a larger B will contain more objects within a single block, and it is not easy to find a matching block in the temporal or inter-view direction for a block containing multiple objects. With a smaller B, multiple objects can be divided into multiple blocks, which makes matching blocks easier to find. Table 6 tabulates the decoding time of our proposed algorithm compared to error-free decoding.
Table 4

PSNR comparison of our proposed algorithm with other methods for entire frames (B = 16)

  PLR   Method      Ballroom  Vassar   Race1    Exit     AkkoKayo  Flamenco  Average
  -     Error free  35.499    34.957   35.820   37.214   36.930    38.448    36.478
  5%    FC          29.077    34.396   22.635   33.617   28.069    29.673    29.578
        MC          29.323    34.494   23.885   33.721   29.610    29.818    30.142
        [18]        29.604    34.382   22.765   33.686   28.022    29.705    29.694
        Proposed    30.441    34.460   26.631   33.558   31.159    30.607    31.143
        ΔPSNR       0.837     0.078    3.866    -0.128   3.137     0.902     1.449
  10%   FC          25.463    33.525   19.551   30.748   23.431    26.445    26.527
        MC          25.445    33.579   21.189   31.083   26.098    26.325    27.287
        [18]        25.782    33.509   20.141   30.769   23.870    26.468    26.757
        Proposed    26.668    33.831   23.684   31.445   25.872    26.509    28.002
        ΔPSNR       0.886     0.323    3.543    0.676    2.002     0.041     1.245
  15%   FC          23.466    32.394   17.479   29.056   20.595    24.308    24.550
        MC          23.481    32.634   19.937   29.506   22.249    24.447    25.376
        [18]        23.933    32.458   18.515   28.960   20.719    24.541    24.854
        Proposed    25.131    33.075   21.264   29.478   22.781    25.209    26.156
        ΔPSNR       1.198     0.617    2.749    0.518    2.062     0.668     1.302
  20%   FC          22.117    31.900   16.568   27.301   19.939    22.948    23.462
        MC          22.220    31.747   19.048   28.247   22.089    23.095    24.408
        [18]        22.506    31.942   17.867   27.220   20.881    23.016    23.905
        Proposed    24.040    32.996   20.741   28.031   22.787    23.167    25.294
        ΔPSNR       1.534     1.054    2.874    0.811    1.906     0.151     1.389

Table 5

PSNR comparison of our proposed algorithm with [18] for error frames only (B = 16)

  PLR   Method      Ballroom  Vassar   Race1    Exit     AkkoKayo  Flamenco  Average
  5%    Error free  35.520    34.991   35.792   37.275   36.906    38.505    36.498
        [18]        22.229    33.039   16.674   31.007   21.862    25.495    25.051
        Proposed    23.965    33.759   21.246   31.382   26.274    26.474    27.183
        ΔPSNR       1.736     0.720    4.572    0.375    4.412     0.979     2.132
  10%   Error free  35.519    34.970   35.772   37.225   36.953    38.480    36.487
        [18]        21.701    32.113   16.656   29.200   20.596    24.054    24.053
        Proposed    23.291    32.948   20.843   30.043   23.349    24.305    25.797
        ΔPSNR       1.590     0.835    4.187    0.843    2.753     0.251     1.743
  15%   Error free  35.513    34.973   35.762   37.224   36.972    38.439    36.481
        [18]        20.993    31.330   16.401   27.757   18.831    22.916    23.038
        Proposed    22.603    32.306   19.633   28.399   21.351    23.715    24.668
        ΔPSNR       1.610     0.976    3.232    0.642    2.520     0.799     1.630
  20%   Error free  35.513    34.965   35.790   37.219   36.950    38.462    36.483
        [18]        20.295    31.080   16.020   26.373   19.191    21.705    22.444
        Proposed    22.105    32.398   19.234   27.279   21.488    21.946    24.075
        ΔPSNR       1.810     1.318    3.214    0.906    2.297     0.241     1.631

Table 6

Average decoding time comparison of our proposed algorithm (Ballroom sequence) (ms/frame)

  Packet loss rate   Error free   Proposed   Overhead (%)
  5%                 112.98       165.53     46.5
  10%                112.98       231.22     104.7
  15%                112.98       316.06     179.7
  20%                112.98       362.59     220.9

Figures 10, 11, and 12 exhibit subjective quality comparisons of our proposed algorithm with [18]. From these figures, it is obvious that our proposed algorithm significantly improves the subjective quality. In general, our proposed algorithm efficiently reduces broken-image effects.
Figure 10

Subjective comparison of the Ballroom sequence at the 30th frame (B = 8). (a) Error free (35.444 dB). (b) Concealed frame by [18] (19.093 dB). (c) Concealed frame by the proposed algorithm (20.134 dB).

Figure 11

Subjective comparison of the Exit sequence at the 20th frame (B = 8). (a) Error free (37.142 dB). (b) Concealed frame by [18] (27.040 dB). (c) Concealed frame by the proposed algorithm (28.534 dB).

Figure 12

Subjective comparison of the Race1 sequence at the 55th frame (B = 8). (a) Error free (36.016 dB). (b) Concealed frame by [18] (14.985 dB). (c) Concealed frame by the proposed algorithm (19.021 dB).

4. Conclusions

To deal with the entire frame loss problem in multi-view video decoding, this paper proposes an error concealment algorithm that considers the relationship between motion vectors and disparity vectors. Based on the parallelogram-like motion relationship, a joint SAD minimization approach is proposed to find the best block for concealing the current error block. With the help of the proposed method, the error propagation problem can be reduced. Simulation results demonstrate that our proposed algorithm outperforms previous work in terms of both subjective and objective quality measurements.

Declarations

Authors’ Affiliations

(1)
Department of Electrical Engineering, National Dong Hwa University
(2)
Department of Video Coding Core Technology, Industrial Technology Research Institute

References

  1. Ho YS, Oh KJ: Overview of multi-view video coding. In Proceedings of International Workshop on Systems, Signals and Image Processing, 2007 and 6th EURASIP Conference focused on Speech and Image Processing, Multimedia Communications and Services. Maribor; 2007:5-12.
  2. Vetro A, Wiegand T, Sullivan GJ: Overview of the stereo and multi-view video coding extensions of the H.264/MPEG-4 AVC standard. Proc. IEEE 2011, 99(4):626-642.
  3. Wiegand T, Sullivan GJ, Bjontegaard G, Luthra A: Overview of the H.264/AVC video coding standard. IEEE Trans. Circ. Syst. Vid. Technol. 2003, 13(7):560-576.
  4. Wiegand T, Sullivan G: Draft ITU-T recommendation and final draft international standard of joint video specification (ITU-T Rec. H.264/ISO/IEC 14496-10 AVC). In Joint Video Team of ISO/IEC MPEG and ITU-T VCEG, JVT-G050. Pattaya; 2003.
  5. Wang Y, Tham JY, Lee WS, Goh KH: Pattern selection for error-resilient slice interleaving based on receiver error concealment technique. In Proceedings of IEEE International Conference on Multimedia and Expo. Barcelona; 2011.
  6. Micallef BW, Debono CJ: An analysis on the effect of transmission errors in real-time H.264-MVC bit-streams. In Proceedings of IEEE Mediterranean Electrotechnical Conference MELECON. Valletta; 2010:1215-1220.
  7. Dissanayake MB, De Silva DVSX, Worrall ST, Fernando WAC: Error resilience technique for multi-view coding using redundant disparity vectors. In Proceedings of IEEE International Conference on Multimedia and Expo. Suntec; 2010:1712-1717.
  8. Xiao J, Tillo T, Lin C, Zhao Y: Joint redundant motion vector and intra macroblock refreshment for video transmission. EURASIP J. Image Vid. Process. 2011, 2011(12). doi:10.1186/1687-5281-2011-12
  9. Ye S, Ouaret M, Dufaux F, Ebrahimi T: Improved side information generation for distributed video coding by exploiting spatial and temporal correlations. EURASIP J. Image Vid. Process. 2009, 2009:683510. doi:10.1155/2009/683510
  10. Xiang X, Zhao D, Ma S, Gao W: Auto-regressive model based error concealment scheme for stereoscopic video coding. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing. Prague; 2011:849-852.
  11. Lee PJ, Kuo KT: An adaptive error concealment method selection algorithm for multi-view video coding. In Proceedings of IEEE International Conference on Consumer Electronics. Las Vegas; 2013:474-475.
  12. Stankiewicz O, Wegner K, Domanski M: Error concealment for MVC and 3D video coding. In Proceedings of Picture Coding Symposium. Nagoya; 2010:498-501.
  13. Lee SH, Lee SH, Cho NI, Yang JH: A motion vector prediction method for multi-view video coding. In Proceedings of International Conference on Intelligent Information Hiding and Multimedia Signal Processing. Harbin; 2008:1247-1250.
  14. Liu S, Chen Y, Wang YK, Gabbouj M, Hannuksela MM, Li H: Frame loss error concealment for multi-view video coding. In Proceedings of IEEE International Symposium on Circuits and Systems. Seattle; 2008:3470-3473.
  15. Liang L, Ma R, An P, Liu C: An effective error concealment method used in multi-view video coding. In Proceedings of International Congress on Image and Signal Processing. Shanghai; 2011:76-79.
  16. Chung TY, Sull S, Kim CS: Frame loss concealment for stereoscopic video plus depth sequences. IEEE Trans. Consum. Electron. 2011, 57(3):1336-1344.
  17. Chen Y, Cai C, Ma KK: Stereoscopic video error concealment for missing frame recovery using disparity-based frame difference projection. In Proceedings of IEEE International Conference on Image Processing. Cairo; 2009:4289-4292.
  18. Zhou Y, Hou C, Pan R, Yuan Z, Yang L: Distortion analysis and error concealment for multi-view video transmission. In Proceedings of IEEE International Symposium on Broadband Multimedia Systems and Broadcasting. Shanghai; 2010:1-5.

Copyright

© Kuan et al.; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.