  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:10:58
19 Oct 2022

Introducing sparsity in a convnet is an efficient way to reduce its complexity while keeping its performance almost intact. Most of the time, sparsity is introduced using a three-stage pipeline: 1) training the model to convergence, 2) pruning the model, 3) fine-tuning the pruned model to recover performance. The last two steps are often performed iteratively, leading to reasonable results but also to a time-consuming process. In our work, we propose to remove the first step of the pipeline and to combine the other two into a single training-pruning cycle, allowing the model to jointly learn the optimal weights while being pruned. We do this by introducing a novel pruning schedule, named One-Cycle Pruning (OCP), which prunes from the beginning of training until its very end. Experiments conducted on a variety of combinations of architectures (VGG-16, ResNet-18), datasets (CIFAR-10, CIFAR-100, Caltech-101), and sparsity values (80%, 90%, 95%) show that OCP not only consistently outperforms common pruning schedules such as One-Shot, Iterative, and Automated Gradual Pruning, but also drastically reduces the required training budget. Moreover, experiments following the Lottery Ticket Hypothesis show that OCP finds higher-quality and more stable pruned networks.
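The abstract does not spell out the exact OCP sparsity function, so the sketch below is only illustrative: a generic polynomial ramp from 0% sparsity at the first training step to the target sparsity at the last step, combined with simple magnitude pruning. The cubic shape is borrowed from the Automated Gradual Pruning baseline mentioned above and is an assumption, not the paper's actual schedule.

```python
def sparsity_at(step, total_steps, final_sparsity):
    """Sparsity target at a given training step.

    Assumption: a cubic ramp from 0 to final_sparsity over the whole
    run (pruning starts at step 0 and ends at the last step, as in the
    OCP idea of a single training-pruning cycle).
    """
    progress = step / total_steps
    return final_sparsity * (1.0 - (1.0 - progress) ** 3)

def prune_by_magnitude(weights, sparsity):
    """Zero out the fraction `sparsity` of smallest-magnitude weights."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# Hypothetical usage inside a training loop: after each optimizer step,
# re-apply the mask implied by the current sparsity target.
total_steps = 100
final_sparsity = 0.90
weights = [0.5, -0.1, 0.3, -0.8]
for step in range(total_steps + 1):
    target = sparsity_at(step, total_steps, final_sparsity)
    weights = prune_by_magnitude(weights, target)
```

In a real convnet this would be applied per layer to the weight tensors (e.g. via a binary mask kept alongside each parameter), but the scheduling logic is the same.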
