  • Research Article
  • Open access

Joint Rendering and Segmentation of Free-Viewpoint Video

Abstract

This paper presents a method that jointly performs synthesis and object segmentation of free-viewpoint video, using multiview video as the input. The method is designed to achieve robust segmentation from online video input without per-frame user interaction or precomputation. It shares computation between the synthesis and segmentation steps: the matching costs calculated during the synthesis step are adaptively fused with other cues in the segmentation step, weighted by their reliability. Since segmentation is performed directly for arbitrary viewpoints, the extracted object can be superimposed onto another 3D scene with geometric consistency; the object and the new background move naturally together as the viewpoint changes, as if they existed in the same space. In our experiments, the method processes online video input captured by a 25-camera array and renders the result images at 4.55 fps.
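The central idea of the abstract, reusing the synthesis-stage matching costs and fusing them with other cues according to their reliability, can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the per-pixel cost maps, the linear reliability weighting, and the simple threshold standing in for the final labeling are all illustrative assumptions.

import numpy as np

def fuse_cues(matching_cost, color_cost, reliability):
    """Per-pixel fusion of two foreground/background costs.

    matching_cost : (H, W) cost reused from the view-synthesis step,
                    low where the pixel is consistent with the object's depth.
    color_cost    : (H, W) cost from an additional cue (e.g., a color model).
    reliability   : (H, W) values in [0, 1]; high where the matching cost is
                    trustworthy (e.g., textured, unambiguous correspondences).
    """
    # Weight the geometry-based cue by its reliability and fall back to the
    # other cue where the correspondence is ambiguous (assumed scheme).
    return reliability * matching_cost + (1.0 - reliability) * color_cost

def segment(matching_cost, color_cost, reliability, threshold=0.5):
    fused = fuse_cues(matching_cost, color_cost, reliability)
    # A per-pixel threshold stands in for whatever global optimization
    # the full method may use; True marks foreground pixels.
    return fused < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w = 4, 4
    mask = segment(rng.random((h, w)), rng.random((h, w)), rng.random((h, w)))
    print(mask)

Because the fused cost is computed per pixel of the rendered (arbitrary) viewpoint, the resulting mask applies directly to the synthesized image, which is what allows the extracted object to be composited into another 3D scene.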

Publisher note

To access the full article, please see the PDF.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Masato Ishii.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Reprints and permissions

About this article

Cite this article

Ishii, M., Takahashi, K. & Naemura, T. Joint Rendering and Segmentation of Free-Viewpoint Video. J Image Video Proc 2010, 763920 (2010). https://doi.org/10.1155/2010/763920

Download citation


  • DOI: https://doi.org/10.1155/2010/763920

Keywords