
Knowledge distillation is a promising way to improve the performance of a small deep network: a student model with fewer parameters and lower computational cost can match the performance of a much larger teacher model on specific computer vision tasks. This is especially attractive for high-accuracy real-time crowd counting, where computational resources are often limited and model efficiency is critical. In this paper, we propose a novel task-specific knowledge distillation framework for crowd counting, named ShuffleCount. Its main contributions are two-fold. First, unlike existing frameworks, the task-specific ShuffleCount learns from the teacher network through hierarchical feature regulation and better avoids negative knowledge transferred from the teacher. Second, the proposed student network, an optimized ShuffleNet, shows promising performance: on the benchmark ShanghaiTech Part A dataset, it achieves 15% higher accuracy than the state-of-the-art MobileCount while keeping the computational cost low. Our code is available online at https://github.com/JiangMinyang/CC-KD.
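The abstract gives no implementation details, so the following is only a minimal PyTorch-style sketch of the general idea of feature-level distillation for density-map crowd counting, not the paper's actual method. The class name DistillationLoss, the feature_weight hyperparameter, and the assumption that both teacher and student expose intermediate feature maps alongside their predicted density maps are all hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical sketch of feature-level knowledge distillation for
    # crowd counting; names and loss weights are illustrative only and
    # are not taken from the ShuffleCount paper.
    class DistillationLoss(nn.Module):
        def __init__(self, feature_weight=0.5):
            super().__init__()
            self.feature_weight = feature_weight

        def forward(self, student_density, gt_density,
                    student_feats, teacher_feats):
            # Supervised counting loss against the ground-truth density map.
            count_loss = F.mse_loss(student_density, gt_density)
            # Feature regulation: match student features to the frozen
            # teacher's features at several depths of the network.
            feat_loss = sum(
                F.mse_loss(s, t.detach())
                for s, t in zip(student_feats, teacher_feats)
            )
            return count_loss + self.feature_weight * feat_loss

In this sketch, detaching the teacher features keeps gradients from flowing into the teacher, so only the student is updated; how ShuffleCount actually regulates features hierarchically and filters out negative teacher knowledge is described in the paper itself.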
