OPS-Net: Over-Parameterized Sharing Networks for Video Frame Interpolation
Zhen-Fang Wang, Yan-Jiang Wang, Shuai Shao, Bao-Di Liu
Video frame interpolation algorithms improve temporal resolution by inserting non-existent frames into a video sequence. With the help of skip connections, many kernel-based methods train deep neural networks to accurately establish the complicated spatiotemporal relationship between pixels in adjacent frames. Still, these connections operate only in the feature dimension. To this end, we introduce the Over-Parameterized Sharing Networks (OPS-Net) to implement weight sharing across different layers, which integrates deep and shallow features more directly. Specifically, we over-parameterize each convolutional layer to capture movement information efficiently, and the additional trainable weights are shared among distinct layers. After training, the additional weights are fused into the conventional convolutional layer and therefore do not increase the test phase's computation. Experimental results show that our method generates favorable frames compared with several state-of-the-art approaches.
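The fusion step rests on the fact that convolution is linear in its weights: a base kernel and a shared extra kernel applied to the same input give the same output as a single convolution with their sum. The following is a minimal PyTorch-style sketch of that idea under assumed names (OverParamConv2d, shared); it illustrates the general over-parameterization-and-fusion mechanism, not the authors' OPS-Net implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class OverParamConv2d(nn.Module):
    """Convolution whose effective kernel is a base weight plus an extra
    (optionally shared) weight; the extra weight can be folded into the
    base kernel after training, so inference cost is that of a plain conv."""

    def __init__(self, channels, kernel_size=3, shared_weight=None):
        super().__init__()
        self.base = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2)
        if shared_weight is None:
            shared_weight = nn.Parameter(torch.zeros_like(self.base.weight))
        # The same Parameter object may be passed to several layers,
        # realizing weight sharing across different layers.
        self.shared = shared_weight

    def forward(self, x):
        # Training path: convolve with base kernel + shared extra kernel.
        weight = self.base.weight + self.shared
        return F.conv2d(x, weight, self.base.bias, padding=self.base.padding)

    @torch.no_grad()
    def fuse(self):
        # Test path: fold the shared weights into the base convolution,
        # leaving a standard Conv2d with unchanged computation.
        self.base.weight += self.shared
        return self.base


# Usage sketch: two layers sharing one extra weight tensor.
shared = nn.Parameter(torch.randn(16, 16, 3, 3) * 0.01)
layer_a = OverParamConv2d(16, shared_weight=shared)
layer_b = OverParamConv2d(16, shared_weight=shared)

x = torch.randn(1, 16, 32, 32)
y_train = layer_b(layer_a(x))                      # over-parameterized path
y_test = layer_b.fuse()(layer_a.fuse()(x))         # fused, plain-conv path
assert torch.allclose(y_train, y_test, atol=1e-6)  # identical outputs
```

In this sketch the fused network is numerically equivalent to the over-parameterized one, which is why the extra weights add training capacity without any test-time overhead.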