Bottom-Up Saliency Meets Top-Down Semantics For Object Detection
Tomoya Sawada, Teng-Yok Lee, Masahiro Mizuno
While convolutional neural networks (CNNs) have successfully boosted the accuracy of object detection algorithms, detecting far-away objects remains challenging because they can be tiny in an image. To enhance CNNs' ability to detect tiny objects, this paper presents an algorithm called the Saliency-Guided Feature Module (SGFM). Based on low-level image features, we compute a saliency map over the image that indicates where foreground objects could be, and use the SGFM to enhance the CNN's feature maps so they focus on those areas. We also present a new dataset named the Camera Monitoring System Driving Dataset (CMS-DD), whose images were captured from the viewing angle of side mirrors on driving vehicles, so far-away objects look even tinier. Our experiments show that SGFMs further improve a recent state-of-the-art object detector in practical driving scenes like CMS-DD.
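The abstract does not spell out how the saliency map modulates the feature maps. As a rough illustration only, one common way to realize saliency-guided feature modulation is a residual spatial gate, sketched below in PyTorch; the module name SGFM matches the paper, but the layer choices, the sigmoid gating formula, and the stand-in saliency input are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SGFM(nn.Module):
    """Minimal sketch of a saliency-guided feature module (an assumption,
    not the paper's exact design): a bottom-up saliency map gates the
    detector's feature maps so likely-foreground regions are boosted."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv adapts the single-channel saliency map to the feature channels
        self.adapt = nn.Conv2d(1, channels, kernel_size=1)

    def forward(self, features: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
        # features: (N, C, H, W) feature map from the detector backbone
        # saliency: (N, 1, h, w) saliency computed from low-level image features
        s = F.interpolate(saliency, size=features.shape[-2:],
                          mode="bilinear", align_corners=False)
        gate = torch.sigmoid(self.adapt(s))  # per-channel spatial gate in (0, 1)
        # Residual gating: original features are preserved, salient areas amplified
        return features * (1.0 + gate)


# Usage with random tensors standing in for backbone output and saliency:
feat = torch.randn(2, 256, 32, 32)
sal = torch.rand(2, 1, 128, 128)  # e.g. from a classic low-level saliency detector
out = SGFM(256)(feat, sal)
print(out.shape)  # torch.Size([2, 256, 32, 32])
```

The residual form `features * (1 + gate)` is a deliberate choice in this sketch: it can only amplify responses, so a weak or noisy saliency map degrades gracefully toward the unmodified features rather than suppressing them.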