
JMPNET: JOINT MOTION PREDICTION FOR LEARNING-BASED VIDEO COMPRESSION

Dongyang Li, Zhenhong Sun, Zhiyu Tan, Xiuyu Sun, Fangyi Zhang, Yichen Qian, Hao Li

09 May 2022

In recent years, learning-based approaches have attracted increasing attention in the field of video compression. Recent methods of this kind typically consist of three major components: an intra-frame network, a motion prediction network, and a residual network, among which the motion prediction part is particularly critical. Benefiting from optical flow, which enables dense motion prediction, recent methods have shown competitive performance compared with traditional codecs. However, problems such as tail shadows and background distortion in the predicted frame remain unsolved. To tackle these problems, this paper introduces JMPNet, which provides more accurate motion information by using both optical flow and a dynamic local filter, together with an attention map that fuses these two sources of motion information. Experimental results show that the proposed method surpasses state-of-the-art (SOTA) rate-distortion (RD) performance on most datasets.
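The abstract does not spell out JMPNet's exact architecture, but the general idea it describes, blending a flow-warped prediction with a dynamic-local-filter prediction through a per-pixel attention map, can be sketched as follows. This is a minimal PyTorch illustration under assumed shapes and a toy prediction head; the names `warp`, `local_filter`, and `JointMotionFusion` are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp(frame, flow):
    """Backward-warp `frame` (B, C, H, W) with a dense optical flow (B, 2, H, W)."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame.device)   # (2, H, W)
    coords = grid.unsqueeze(0) + flow                               # (B, 2, H, W)
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((gx, gy), dim=-1)                     # (B, H, W, 2)
    return F.grid_sample(frame, sample_grid, align_corners=True)


def local_filter(frame, kernels, k=5):
    """Apply per-pixel k x k dynamic filters (B, k*k, H, W) to `frame`."""
    b, c, h, w = frame.shape
    patches = F.unfold(frame, kernel_size=k, padding=k // 2)        # (B, C*k*k, H*W)
    patches = patches.view(b, c, k * k, h, w)
    weights = F.softmax(kernels.view(b, 1, k * k, h, w), dim=2)     # normalize per pixel
    return (patches * weights).sum(dim=2)                           # (B, C, H, W)


class JointMotionFusion(nn.Module):
    """Hypothetical fusion head: combine the flow-warped and filter-based
    predictions with a learned per-pixel attention map."""

    def __init__(self, in_ch=3, k=5):
        super().__init__()
        self.k = k
        # Toy head predicting dynamic filter weights plus one attention channel
        # from the concatenated reference frame and motion flow.
        self.head = nn.Conv2d(in_ch + 2, k * k + 1, kernel_size=3, padding=1)

    def forward(self, ref, flow):
        out = self.head(torch.cat([ref, flow], dim=1))
        kernels, attn_logits = out[:, :-1], out[:, -1:]
        attn = torch.sigmoid(attn_logits)                           # (B, 1, H, W)
        pred_flow = warp(ref, flow)
        pred_filter = local_filter(ref, kernels, self.k)
        # Attention-weighted blend of the two motion-compensated predictions.
        return attn * pred_flow + (1.0 - attn) * pred_filter
```

The attention map lets the model fall back on the dynamic local filter in regions where flow-based warping tends to fail (e.g., occlusions that cause tail shadows or background distortion), while relying on the warped prediction elsewhere.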
