RESIDUAL GUIDED DEBLOCKING WITH DEEP LEARNING
Wei Jia, Li Li, Zhu Li, Xiang Zhang, Shan Liu
The block-based coding structure in the hybrid video coding framework inevitably introduces compression artifacts such as blocking and ringing. Recently, neural-network-based loop filters have been proposed to enhance the reconstructed frame, but the coding information has not been fully utilized in the design of these networks. Therefore, in this paper, we propose a Residual-Reconstruction-based Convolutional Neural Network (RRCNN) to further improve coding efficiency, in which the residual frame is fed into the network as a supplementary input to the reconstructed frame. In essence, the residual signal provides effective information about block partitions and helps distinguish smooth, edge, and texture regions in a picture, so that more adaptive parameters can be learned to handle different texture characteristics. In addition, the structure of the proposed dual-input network has been carefully designed to learn useful context information from the two signals with their distinct features. To the best of our knowledge, this is the first work that employs the residual signal in a CNN-based in-loop filter for video coding. The experimental results show that the proposed RRCNN approach achieves significant BD-rate savings compared to HEVC and state-of-the-art CNN-based schemes, indicating that the residual signal plays an important role in the enhancement of reconstructed video frames.
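As a rough illustration of the dual-input idea described in the abstract, the PyTorch sketch below feeds a reconstructed frame and its decoded residual through separate convolutional branches and fuses their features before predicting an enhancement. The layer counts, channel widths, and branch/fusion layout are illustrative assumptions and do not reproduce the actual RRCNN architecture from the paper.

import torch
import torch.nn as nn

class DualInputFilter(nn.Module):
    """Minimal sketch of a dual-input in-loop filter (not the paper's RRCNN).

    The reconstructed frame and the residual frame are processed by separate
    convolutional branches, their features are concatenated, and a fusion
    trunk predicts a correction that is added back to the reconstruction.
    """

    def __init__(self, channels: int = 64, num_blocks: int = 4):
        super().__init__()
        # Feature extraction branch for the reconstructed frame (assumed depth).
        self.rec_branch = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)
        )
        # Feature extraction branch for the residual frame (assumed depth).
        self.res_branch = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)
        )
        # Fusion trunk operating on the concatenated branch features.
        trunk = [nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_blocks):
            trunk += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        trunk += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.trunk = nn.Sequential(*trunk)

    def forward(self, rec: torch.Tensor, res: torch.Tensor) -> torch.Tensor:
        feat = torch.cat([self.rec_branch(rec), self.res_branch(res)], dim=1)
        # Residual learning: predict a correction and add it to the reconstruction.
        return rec + self.trunk(feat)

if __name__ == "__main__":
    rec = torch.rand(1, 1, 64, 64)  # reconstructed luma block (toy data)
    res = torch.rand(1, 1, 64, 64)  # decoded prediction residual for the same block
    out = DualInputFilter()(rec, res)
    print(out.shape)  # torch.Size([1, 1, 64, 64])

In an actual codec integration, the residual input would be the decoded prediction residual available at both encoder and decoder, so no extra signaling is needed; how the two signals are normalized and fused is a design choice this sketch does not settle.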