21 Sep 2021

Monocular depth estimation finds a wide range of applications in modeling 3D scenes. Since collecting ground-truth depth labels for supervision is expensive, much work has been done in a self-supervised manner. A common practice is to train the network by optimizing a photometric objective (i.e., view synthesis) due to its effectiveness. However, this training objective is sensitive to optical changes and lacks object-level cues, which leads to sub-optimal results in some cases, e.g., artifacts in complex regions and depth discontinuities around thin structures; we summarize these as depth ambiguities. In this paper, we propose a simple yet effective architecture that introduces semantic cues into the supervision to address these problems. First, through a study of the problems, we find that they stem from limitations of the commonly applied photometric reconstruction objective. We then develop a method that uses semantic cues to encode the geometric constraint behind view synthesis. The proposed objective is more reliable for confusing pixels and incorporates object-level perception. Experiments show that, without adding inference complexity, our method greatly alleviates depth ambiguities and performs comparably with state-of-the-art methods on the KITTI benchmark.
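
For context, the photometric (view-synthesis) objective the abstract refers to is commonly implemented as a weighted combination of SSIM and L1 differences between a target frame and a source frame warped into the target view using the predicted depth and camera pose (the formulation popularized by Monodepth2). The sketch below is a minimal, generic version of that baseline loss, not the paper's proposed semantic objective; the function name and the alpha = 0.85 weighting are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def photometric_loss(target, warped, alpha=0.85):
        """Common photometric reconstruction loss: alpha * DSSIM + (1 - alpha) * L1.

        target, warped: (B, 3, H, W) images in [0, 1]; `warped` is the source
        frame resampled into the target view via predicted depth and pose.
        This is a generic baseline sketch, not the paper's method.
        """
        # Per-pixel L1 term, averaged over color channels.
        l1 = (target - warped).abs().mean(1, keepdim=True)

        # Simplified SSIM using 3x3 average pooling for local statistics.
        mu_x = F.avg_pool2d(target, 3, 1, 1)
        mu_y = F.avg_pool2d(warped, 3, 1, 1)
        sigma_x = F.avg_pool2d(target * target, 3, 1, 1) - mu_x ** 2
        sigma_y = F.avg_pool2d(warped * warped, 3, 1, 1) - mu_y ** 2
        sigma_xy = F.avg_pool2d(target * warped, 3, 1, 1) - mu_x * mu_y
        C1, C2 = 0.01 ** 2, 0.03 ** 2
        ssim_n = (2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)
        ssim_d = (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2)
        ssim = (ssim_n / ssim_d).mean(1, keepdim=True)

        # DSSIM in [0, 1]; combine with L1 and reduce to a scalar.
        dssim = ((1 - ssim) / 2).clamp(0, 1)
        return (alpha * dssim + (1 - alpha) * l1).mean()

Because this loss compares raw pixel intensities, it is exactly the term that becomes unreliable under the optical changes and thin structures described above, which motivates supplementing it with semantic cues.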
