
Self-Supervised Depth Estimation Via Implicit Cues From Videos

Jianrong Wang, Ge Zhang, Zhenyu Wu, Xuewei Li, Li Liu

Length: 00:06:15
11 Jun 2021

In self-supervised monocular depth estimation, depth discontinuities and artifacts around moving objects remain challenging problems. Existing self-supervised methods usually use two views to train the depth estimation network but only a single view to make predictions. Compared with static views, the abundant dynamic properties between video frames are beneficial for refining depth estimation, especially for dynamic objects. In this work, we improve the self-supervised learning framework for depth estimation using consecutive frames from monocular and stereo videos. The main idea is to exploit an implicit depth cue extractor that leverages dynamic and static cues to generate useful depth proposals. These cues can predict distinguishable motion contours and geometric scene structures. Moreover, a new high-dimensional attention module is proposed to extract a clear global transformation, which effectively suppresses the uncertainty of local descriptors in high-dimensional space, resulting in more reliable optimization of the learning framework. Experiments demonstrate that the proposed framework outperforms the state of the art on the KITTI and Make3D datasets.
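For context, two-frame self-supervised training of the kind the abstract describes is typically driven by a photometric reprojection objective: predicted depth and relative camera pose are used to warp one frame into the other, and the discrepancy is minimized. The sketch below is a minimal PyTorch illustration of that standard objective, not the authors' implementation; the depth map `depth`, relative pose `T` (e.g., from a pose network), intrinsics `K`, and all function names are assumed for illustration.

```python
# Minimal sketch of the standard photometric reprojection loss used in
# self-supervised depth estimation (assumed setup, not the paper's code).
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    """Lift each pixel to a 3D point using the predicted depth."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=depth.dtype, device=depth.device),
        torch.arange(w, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).reshape(3, -1)   # (3, H*W)
    rays = K_inv @ pix                                        # camera rays
    return depth.reshape(b, 1, -1) * rays.unsqueeze(0)        # (B, 3, H*W)

def project(points, T, K):
    """Transform 3D points by the relative pose T and project to pixels."""
    R, t = T[:, :3, :3], T[:, :3, 3:]
    cam = R @ points + t                                      # (B, 3, H*W)
    pix = K.unsqueeze(0) @ cam
    return pix[:, :2] / pix[:, 2:].clamp(min=1e-6)            # (B, 2, H*W)

def photometric_loss(target, source, depth, T, K):
    """Warp the source frame into the target view and compare (L1)."""
    b, _, h, w = target.shape
    points = backproject(depth, torch.inverse(K))
    pix = project(points, T, K)
    # Normalize pixel coordinates to [-1, 1] as required by grid_sample.
    gx = pix[:, 0] / (w - 1) * 2 - 1
    gy = pix[:, 1] / (h - 1) * 2 - 1
    grid = torch.stack([gx, gy], dim=-1).reshape(b, h, w, 2)
    warped = F.grid_sample(source, grid, align_corners=True)
    return (target - warped).abs().mean()
```

Methods in this family typically combine such a reprojection term with an SSIM component and an edge-aware smoothness regularizer; the cue extractor and attention module described above are refinements on top of this kind of objective.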

Chairs:
Eduardo A B da Silva
