04 May 2020

In recent years, Siamese trackers have achieved great success in visual tracking, delivering competitive performance in both accuracy and speed. However, they may suffer performance degradation under large pose variations, out-of-plane rotations, and similar challenges. In this paper, we propose a novel real-time channel-attention-based generative network (AGSNet) for robust visual tracking. AGSNet can better recognize targets undergoing significant appearance variations and distinguish them from similar distractors. The model introduces channel-favored feature attention in the template branch to enhance discriminative capacity, and uses a simple generative network in the instance branch to capture a variety of target appearance changes. With end-to-end offline training, our model achieves robust visual tracking over a long temporal span. Experimental results on the OTB-2013 and OTB-2015 benchmark datasets demonstrate that the proposed tracker outperforms other approaches while running at more than 40 frames per second.
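The channel attention applied to the template branch can be illustrated with a minimal squeeze-and-excitation-style sketch: spatially pool each channel of the template feature map, pass the pooled vector through a small bottleneck MLP, and gate each channel with a sigmoid weight. This is a hypothetical illustration of channel attention in general; the weight shapes, reduction ratio, and the paper's exact "channel favored feature attention" formulation are assumptions, not the authors' implementation.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """SE-style channel attention sketch (hypothetical; not the paper's exact design).

    feat: template features of shape (C, H, W)
    w1:   bottleneck weight, shape (C // r, C)
    w2:   expansion weight, shape (C, C // r)
    """
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    s = feat.mean(axis=(1, 2))
    # Excite: bottleneck MLP with ReLU, then sigmoid gating per channel
    h = np.maximum(w1 @ s, 0.0)
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))  # per-channel weights in (0, 1)
    # Reweight each channel of the template feature map
    return feat * a[:, None, None]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 6, 6))      # toy 8-channel feature map
w1 = 0.1 * rng.standard_normal((2, 8))     # reduction ratio r = 4 (assumed)
w2 = 0.1 * rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
```

Because the gate lies in (0, 1), attention can only scale channels down, emphasizing discriminative channels relative to suppressed ones before cross-correlation with the instance branch.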
