SELF-ATTENTION DENSE DEPTH ESTIMATION NETWORK FOR UNRECTIFIED VIDEO SEQUENCES
Alwyn Mathew, Aditya Prakash Patra, Jimson Mathew
SPS
Dense depth estimation of a 3D scene has numerous applications, mainly in robotics and surveillance. LiDAR and radar sensors are the standard hardware solutions for real-time depth estimation, but they produce sparse depth maps and can be unreliable. In recent years, research on depth estimation from a single 2D image has received considerable attention, and deep-learning-based self-supervised methods trained on rectified stereo pairs and monocular video frames have shown promising results. We propose a self-attention-based depth and ego-motion network for unrectified images, and we incorporate the camera's non-differentiable distortion model into the training pipeline. Our approach performs competitively with established approaches that rely on rectified images for depth estimation.
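To illustrate how lens distortion enters a self-supervised warping pipeline for unrectified images, the sketch below projects 3D camera-frame points to pixels with a Brown-Conrady radial distortion term. This is a minimal illustration, not the paper's exact formulation: the function name, the two-coefficient radial model, and all parameter values are assumptions.

```python
import numpy as np

def project_with_distortion(pts, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Project Nx3 camera-frame points to pixel coordinates,
    applying radial distortion (illustrative Brown-Conrady model)."""
    x = pts[:, 0] / pts[:, 2]            # normalized image coordinates
    y = pts[:, 1] / pts[:, 2]
    r2 = x * x + y * y                   # squared radius from optical axis
    d = 1.0 + k1 * r2 + k2 * r2 * r2     # radial distortion factor
    u = fx * (x * d) + cx                # distorted pixel coordinates
    v = fy * (y * d) + cy
    return np.stack([u, v], axis=1)

# A point on the optical axis maps to the principal point regardless of
# distortion; an off-axis point is pushed outward when k1 > 0.
pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
uv = project_with_distortion(pts, fx=500.0, fy=500.0, cx=320.0, cy=240.0, k1=0.1)
```

In a view-synthesis loss, pixels sampled through such a projection let the photometric error be computed directly on unrectified frames instead of requiring rectification as a preprocessing step.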