  • SPS Members: Free
  • IEEE Members: $11.00
  • Non-members: $15.00
  • Length: 00:06:57
  • Date: 21 Sep 2021

Video frame interpolation algorithms improve temporal resolution by synthesizing intermediate frames in a video sequence. With the help of skip connections, many kernel-based methods train deep neural networks to accurately establish the complicated spatiotemporal relationship between pixels in adjacent frames; however, these connections operate only in the feature dimension. To this end, we introduce the Over-Parameterized Sharing Networks (OPS-Net), which share weights across different layers and can therefore integrate deep and shallow features more directly. Specifically, we over-parameterize each convolutional layer to capture motion information efficiently, and the additional trainable weights are shared across distinct layers. After training, the additional weights are fused into the conventional convolutional layer and do not increase the computation of the test phase. Experimental results show that our method generates favorable frames compared with several state-of-the-art approaches.
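The abstract describes over-parameterizing each convolutional layer with extra trainable weights that are shared across layers during training and then folded back into the ordinary kernel afterwards, so inference cost does not grow. The snippet below is a minimal PyTorch sketch of that idea, not the authors' implementation; the names (SharedOverParamConv2d, shared_weight, fuse) and the single shared 3x3 auxiliary kernel are assumptions made for illustration only.

```python
# Hypothetical sketch of layer-wise weight sharing with post-training fusion.
# Not the OPS-Net code: layer design and naming are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedOverParamConv2d(nn.Module):
    """Conv layer whose effective kernel = its own weight + a shared extra weight."""

    def __init__(self, shared_weight: nn.Parameter, channels: int, k: int = 3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)
        self.bias = nn.Parameter(torch.zeros(channels))
        self.shared_weight = shared_weight   # the same Parameter object in every layer
        self.fused = False

    def forward(self, x):
        if self.fused:
            w = self.weight                       # after fusion: a plain convolution
        else:
            w = self.weight + self.shared_weight  # over-parameterized training path
        return F.conv2d(x, w, self.bias, padding=w.shape[-1] // 2)

    @torch.no_grad()
    def fuse(self):
        # Fold the shared weights into the conventional kernel once training is done.
        self.weight += self.shared_weight
        self.fused = True


# Usage: two layers sharing one auxiliary kernel.
channels, k = 16, 3
shared = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)
layers = nn.Sequential(*[SharedOverParamConv2d(shared, channels, k) for _ in range(2)])

x = torch.randn(1, channels, 32, 32)
y_before = layers(x)
for m in layers:
    m.fuse()
y_after = layers(x)
print(torch.allclose(y_before, y_after, atol=1e-5))  # fusion preserves the mapping
```

Because the fused kernel equals the sum used during training, the forward mapping is unchanged after fusion, which is how the extra weights avoid adding any test-time computation in this sketch.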
