  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:07:09
22 Sep 2021

We present a method for estimating temporally stable depth video from a sequence of images. We extend prior work on video depth estimation, Neural-RGBD, which uses temporal information by accumulating a depth probability volume over time. We propose three simple yet effective ideas to improve it: (1) a temporal attention module that selects and propagates only meaningful temporal information, (2) a geometric warping operation that warps neighboring features while preserving geometry cues, and (3) a scale-invariant loss that relieves the inherent scale-ambiguity problem in monocular depth estimation. We demonstrate the effectiveness of the proposed ideas by comparing our network, STAD, with state-of-the-art methods. Moreover, we compare STAD with its per-frame counterpart, STAD-frame, to show the importance of utilizing temporal information. The experimental results show that STAD significantly improves the baseline accuracy without a large increase in parameters.
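The abstract does not spell out the exact form of the scale-invariant loss, but a common formulation in monocular depth estimation is the scale-invariant log loss of Eigen et al. (2014). The sketch below assumes that variant; the weighting factor `lam` and function name are illustrative, not taken from the paper.

```python
import numpy as np

def scale_invariant_loss(pred, gt, lam=0.5):
    """Scale-invariant log loss (Eigen et al., 2014), a common choice
    for relieving scale ambiguity in monocular depth estimation.
    pred, gt: positive depth arrays of the same shape.
    lam: weight on the squared-mean term; lam=1.0 gives full
    invariance to a global scale factor on the prediction."""
    d = np.log(pred) - np.log(gt)          # per-pixel log-depth error
    return np.mean(d ** 2) - lam * np.mean(d) ** 2
```

With `lam=1.0` the loss reduces to the variance of the log error, so multiplying the entire prediction by a constant leaves the loss unchanged, which is exactly the invariance a monocular (scale-ambiguous) estimator needs.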
