Texture Plus Depth Video Coding Using Camera Global Motion Information

Fei Cheng*, Tammam Tillo, Jimin Xiao, Byeungwoo Jeon

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

In video coding, traditional motion estimation methods work well for videos with camera translational motion, but their efficiency drops for other motion types, such as rotational and dolly motions. In this paper, a motion-information-based three-dimensional (3D) video coding method is proposed for texture plus depth 3D video. Synchronized global motion information of the camera is exploited to help the encoder improve its rate-distortion performance: temporally neighboring texture and depth frames are projected into the position of the current frame using the depth and camera motion information. The projected frames are then added to the reference buffer list as virtual reference frames. As these virtual reference frames can be more similar to the current to-be-encoded frame than the conventional reference frames, fewer bits are required to represent the residual. The experimental results demonstrate that the proposed scheme enhances the coding performance for all camera motion types and for various scene settings and resolutions, under both the H.264 and HEVC standards. For the computer graphics sequences coded with H.264, the average gains for texture and depth coding are up to 2 dB and 1 dB, respectively. For HEVC with HD-resolution sequences, the texture coding gain reaches 0.4 dB. For realistic sequences, up to 0.5 dB gain (H.264) is achieved for the texture video, while up to 0.7 dB gain is achieved for the depth sequences.
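To illustrate the projection step described in the abstract, the following is a minimal sketch (not the authors' implementation) of depth-based forward warping: a reference texture frame is back-projected to 3D using its depth map and the camera intrinsics, moved by the relative camera motion, and re-projected into the current view to form a virtual reference frame. All names, the intrinsics matrix K, and the motion parameters (R, t) are illustrative assumptions; occlusion handling, hole filling, and the corresponding depth-frame projection are omitted.

```python
# Hedged sketch of depth-based view projection for a virtual reference frame.
import numpy as np

def project_to_current_view(texture, depth, K, R, t):
    """Warp `texture` (H x W x 3) with per-pixel metric `depth` (H x W)
    from the reference camera into the current camera given by (R, t)."""
    H, W = depth.shape
    K_inv = np.linalg.inv(K)

    # Pixel grid in homogeneous coordinates, shape (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)])

    # Back-project every pixel to a 3D point in the reference camera frame.
    pts_ref = (K_inv @ pix) * depth.ravel()

    # Apply the relative camera motion, then re-project into the current view.
    pts_cur = R @ pts_ref + t.reshape(3, 1)
    proj = K @ pts_cur
    x = np.round(proj[0] / proj[2]).astype(int)
    y = np.round(proj[1] / proj[2]).astype(int)

    # Scatter colours into the virtual reference frame (nearest-pixel splat).
    virtual = np.zeros_like(texture)
    valid = (x >= 0) & (x < W) & (y >= 0) & (y < H) & (proj[2] > 0)
    virtual[y[valid], x[valid]] = texture.reshape(-1, texture.shape[2])[valid]
    return virtual
```

In the scheme described above, a frame produced this way would be inserted into the reference picture buffer so that standard motion estimation can predict the current frame from it with a smaller residual than from the conventional temporal references.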

Original language: English
Article number: 7918624
Pages (from-to): 2361-2374
Number of pages: 14
Journal: IEEE Transactions on Multimedia
Volume: 19
Issue number: 11
DOIs
Publication status: Published - Nov 2017

Keywords

  • H.264
  • HD
  • HEVC
  • Three-dimensional (3D) video coding
  • global motion
  • temporal projection
  • texture plus depth
  • virtual reference frame
