27 Oct 2020

In this paper, we present the cascaded context dependency module, a highly lightweight module that improves the performance of deep convolutional neural networks on various visual tasks. Inspired by feature pyramid work in object detection and context dependency work in image recognition, we cascade the contexts of multi-scale feature maps to aggregate local and global information within a local region. We further extract the dependency between the original input and the cascaded contexts for feature re-calibration. Since it employs no learnable layers, our method introduces almost no additional parameters or computation. Furthermore, our module can be seamlessly plugged into many existing CNN architectures to improve their performance. Experiments on the ImageNet and MS COCO benchmarks indicate that our method achieves results on par with or better than related work. Quantitatively, we achieve an absolute 1.42% (77.3137% vs. 75.8974%) top-1 classification accuracy improvement with ResNet50 on the ImageNet 2012 validation set, with negligible computational overhead. Our method also yields significant gains on the MS COCO benchmark for object detection. All code and models are made publicly available.
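
The abstract does not spell out the module's internals, so the following is only a minimal, illustrative sketch of the general idea it describes: pool the input at several scales, cascade the pooled contexts, and use the dependency between the input and the cascaded context as a parameter-free re-calibration signal. The class name, pooling scales, and sigmoid gating below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CascadedContextDependency(nn.Module):
    """Illustrative, parameter-free re-calibration block (assumed design,
    not the paper's reference implementation)."""

    def __init__(self, scales=(2, 4, 8)):
        super().__init__()
        self.scales = scales  # assumed down-sampling factors for multi-scale contexts

    def forward(self, x):
        n, c, h, w = x.shape

        # Cascade contexts: average-pool the input at several scales and
        # accumulate the up-sampled results into a single context map.
        context = torch.zeros_like(x)
        for s in self.scales:
            pooled = F.adaptive_avg_pool2d(x, (max(h // s, 1), max(w // s, 1)))
            context = context + F.interpolate(pooled, size=(h, w), mode="nearest")
        context = context / len(self.scales)

        # Dependency between the original input and the cascaded context:
        # a per-channel similarity score, squashed to (0, 1) and used to
        # re-scale (re-calibrate) the input features. No learnable weights.
        dep = (x * context).mean(dim=(2, 3), keepdim=True)  # shape (n, c, 1, 1)
        gate = torch.sigmoid(dep)
        return x * gate
```

Because such a block has no trainable parameters, it could in principle be inserted after any convolutional stage, e.g. `CascadedContextDependency()(features)`, without changing the number of learnable weights of the host network.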
