11 Jun 2021

Traditional video compressed sensing (VCS) algorithms have elegant theoretical interpretability. However, the deterministic sparse transforms used in these algorithms often cannot satisfy the sparsity requirement, which results in poor reconstruction quality, and their optimization process is slow. Deep learning can learn data-driven transforms while achieving fast reconstruction. This paper proposes an iterative motion compensation and residual reconstruction network for VCS, called ImrNet. ImrNet follows the iterative optimization scheme of the MC-BCS-SPL framework, and each module in ImrNet is trained independently. In addition, we design a motion-compensation network (MCNet) that achieves adaptive fusion among frames by using an image semantic segmentation approach to obtain probability maps that serve as per-frame fusion weights. The proposed MCNet generates the fusion-compensated frame in each iteration of ImrNet. Experimental results show that ImrNet achieves good reconstruction results with only two iterations, and its reconstruction quality exceeds that of state-of-the-art VCS methods.
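The following is a minimal sketch (not the authors' code) of the adaptive fusion idea described for MCNet: a small convolutional head predicts a per-pixel probability map for each candidate frame (softmax across frames), and the fusion-compensated frame is the probability-weighted sum of the candidates. The layer sizes, frame count, and class name are illustrative assumptions.

```python
import torch
import torch.nn as nn


class FusionSketch(nn.Module):
    """Hypothetical per-frame probability-map fusion, sketching the MCNet idea."""

    def __init__(self, num_frames: int = 3, hidden: int = 32):
        super().__init__()
        # Predicts one weight map per candidate frame from the stacked frames.
        self.head = nn.Sequential(
            nn.Conv2d(num_frames, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, num_frames, kernel_size=3, padding=1),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, num_frames, H, W) -- candidate (e.g., neighboring) frames.
        logits = self.head(frames)                            # (B, num_frames, H, W)
        weights = torch.softmax(logits, dim=1)                # per-pixel fusion weights
        fused = (weights * frames).sum(dim=1, keepdim=True)   # (B, 1, H, W)
        return fused


# Usage: fuse three 64x64 grayscale candidate frames into one compensated frame.
model = FusionSketch(num_frames=3)
fused = model(torch.randn(2, 3, 64, 64))  # -> shape (2, 1, 64, 64)
```

Because the weights form a softmax over frames at every pixel, the fused output is a convex combination of the candidates, which is one natural way to realize "probability maps as fusion weights" under these assumptions.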

Chairs:
Yuvraj Parkale

