PixStabNet: Fast Multi-Scale Deep Online Video Stabilization With Pixel-Based Warping
Yu-Ta Chen, Kuan-Wei Tseng, Yao-Chih Lee, Chun-Yu Chen, Yi-Ping Hung
Online video stabilization is increasingly needed for real-time applications such as live streaming, drone remote control, and video communication. We propose a multi-scale convolutional neural network (PixStabNet) that stabilizes video in real time without using future frames. Instead of computing a global homography or multiple homographies, we estimate a pixel-based warping map that transforms each pixel individually, allowing more precise motion modelling. In addition, we propose carefully designed loss functions along with a two-stage training scheme to enhance network robustness. Quantitative results show that our method outperforms other learning-based online methods in terms of stability while maintaining excellent geometric and temporal consistency. Moreover, to the best of our knowledge, the proposed algorithm is the most efficient approach to video stabilization. The models and results are available at: https://yu-ta-chen.github.io/PixStabNet.
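To illustrate the difference between a single global homography and the pixel-based warping described above, the following is a minimal PyTorch-style sketch of applying a dense per-pixel warping map to a frame. It is not the authors' implementation; the function name warp_with_pixel_map and the (N, 2, H, W) displacement convention are assumptions made for illustration only.

# Illustrative sketch (not the paper's code): warping a frame with a dense
# per-pixel displacement map instead of one global homography.
import torch
import torch.nn.functional as F

def warp_with_pixel_map(frame, warp_map):
    # frame:    (N, C, H, W) input frame
    # warp_map: (N, 2, H, W) per-pixel displacement in pixels (dx, dy),
    #           e.g. as a stabilization network might predict
    n, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates
    ys, xs = torch.meshgrid(torch.arange(h, dtype=frame.dtype),
                            torch.arange(w, dtype=frame.dtype),
                            indexing="ij")
    base = torch.stack((xs, ys), dim=0).unsqueeze(0).to(frame.device)  # (1, 2, H, W)
    # Each output pixel samples from its displaced source location
    coords = base + warp_map
    # Normalize coordinates to [-1, 1] as required by grid_sample
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

# Usage: a zero map returns the frame unchanged; a stabilization network
# would instead predict a map moving each pixel to its stabilized position.
frame = torch.rand(1, 3, 240, 320)
warp_map = torch.zeros(1, 2, 240, 320)
stabilized = warp_with_pixel_map(frame, warp_map)

Because every pixel carries its own displacement, such a map can represent spatially varying motion (e.g. parallax or rolling-shutter distortion) that a single homography cannot.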