  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:09:26
13 May 2022

In recent years, deep convolutional neural networks have achieved remarkable results across multiple tasks. However, these complex network models often demand significant computation resources and energy, making them difficult to deploy on power-constrained devices such as IoT systems, mobile phones, and embedded devices. These challenges can be overcome through model compression techniques such as network pruning. In this paper, we propose an adaptive channel pruning module (ACPM) that automatically adjusts the pruning rate for each channel, which prunes redundant channel parameters more efficiently and is more robust across datasets and backbones. With a one-shot pruning strategy, model compression time is reduced significantly. Extensive experiments demonstrate that ACPM substantially improves both pruning rate and accuracy, achieving state-of-the-art results on a range of networks and benchmarks.
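The abstract does not detail how ACPM selects channels, so the following is only a minimal NumPy sketch of generic channel pruning for intuition: output channels of a convolutional weight tensor are ranked by L1 norm, and the weakest ones are zeroed. The function name, the per-layer `keep_ratio` parameter, and the magnitude criterion are all illustrative assumptions, not the paper's method (ACPM learns a per-channel pruning rate instead of using a fixed ratio).

```python
import numpy as np

def prune_channels(weight: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Zero out the weakest output channels of a conv weight (O, I, kH, kW).

    Illustrative magnitude-based channel pruning -- NOT the paper's ACPM,
    which adapts the pruning rate per channel rather than taking a fixed ratio.
    """
    norms = np.abs(weight).sum(axis=(1, 2, 3))        # per-channel L1 norm
    k = max(1, int(round(keep_ratio * weight.shape[0])))
    keep = np.argsort(norms)[-k:]                     # indices of the k strongest channels
    mask = np.zeros(weight.shape[0], dtype=bool)
    mask[keep] = True
    pruned = weight.copy()
    pruned[~mask] = 0.0                               # zero the pruned channels
    return pruned

# Example: keep half of 8 output channels in a 3x3 conv over 3 input channels.
w = np.random.randn(8, 3, 3, 3)
pw = prune_channels(w, keep_ratio=0.5)
```

In practice the zeroed channels (and the matching input channels of the next layer) would be physically removed to realize the compute savings.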
