Neural texture transfer assisted video coding with adaptive up-sampling

Li Yu, Wenshuai Chang, Weize Quan, Jimin Xiao, Dong-Ming Yan, Moncef Gabbouj*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Deep learning techniques have been extensively investigated as a means of further increasing the efficiency of traditional video compression. Deep-learning-based down/up-sampling video coding has proven especially effective when bandwidth or storage is limited. Existing works differ mainly in the super-resolution model used: some simply apply a single-image super-resolution model, ignoring the rich correlation between video frames, while others exploit inter-frame correlation by simply concatenating features across adjacent frames, which may fail when the textures are not well aligned. In this paper, we propose to utilize neural texture transfer, which exploits the semantic correlation between frames and can recover correlated information even when the textures are not aligned. In addition, an adaptive group of pictures (GOP) method is proposed to automatically decide whether a frame should be down-sampled. Experimental results show that the proposed method outperforms standard HEVC and state-of-the-art methods under different compression configurations. Compared to standard HEVC, the BD-rate (PSNR) and BD-rate (SSIM) gains of the proposed method reach -19.1% and -26.5%, respectively.
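The abstract does not spell out the adaptive GOP decision criterion, but the general idea of choosing per GOP between full-resolution coding and down-sampled coding can be illustrated with a standard Lagrangian rate-distortion comparison. The sketch below is a minimal assumption-laden illustration, not the paper's actual rule: the function names, the cost form J = D + λ·R, and the per-GOP statistics are all hypothetical.

```python
def rd_cost(distortion, rate, lam):
    # Lagrangian rate-distortion cost: J = D + lambda * R (lower is better).
    return distortion + lam * rate

def choose_gop_modes(gop_stats, lam=0.1):
    """Pick, for each GOP, whether to encode at full resolution or to
    down-sample before encoding (and super-resolve after decoding).

    gop_stats: list of dicts with estimated (distortion, rate) pairs for
    the 'full' and 'down' coding options of each GOP. All values here
    are illustrative placeholders, not taken from the paper.
    """
    modes = []
    for stats in gop_stats:
        j_full = rd_cost(*stats["full"], lam)
        j_down = rd_cost(*stats["down"], lam)
        modes.append("down" if j_down < j_full else "full")
    return modes
```

For example, a GOP whose down-sampled version saves many bits at a small distortion penalty would be flagged `"down"`, while a detail-rich GOP where down-sampling hurts quality would stay `"full"`.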

Original language: English
Article number: 116754
Journal: Signal Processing: Image Communication
Volume: 107
Publication status: Published - Sept 2022

Keywords

  • Deep learning
  • High-efficiency video coding (HEVC)
  • Low bitrate
  • Machine learning
  • Reference-based super-resolution
  • Video compression
