Length: 13:11
27 Oct 2020

Recent work on tensor decomposition of convolutional neural networks has paid little attention to fine-tuning the decomposed models effectively. We propose to improve both the accuracy and the adversarial robustness of decomposed networks over existing non-iterative methods by distilling knowledge from the computationally intensive undecomposed (teacher) model to the decomposed (student) model. Through a series of experiments, we demonstrate the effectiveness of knowledge distillation with different loss functions and compare it to the existing fine-tuning strategy of minimizing cross-entropy loss with ground-truth labels. We conclude that the student networks obtained by the proposed approach are superior not only in accuracy but also in adversarial robustness, which is often compromised by existing methods.
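The abstract does not spell out the distillation objective, but a common choice for this kind of teacher-to-student fine-tuning is the soft-target loss of Hinton et al. (2015): a temperature-softened KL-divergence term against the teacher's outputs combined with the usual cross-entropy on ground-truth labels. The sketch below (PyTorch) illustrates that standard formulation; the `temperature` and `alpha` values are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 4.0,   # assumed default, not from the paper
                      alpha: float = 0.7) -> torch.Tensor:
    """Soft-target knowledge distillation loss (Hinton et al., 2015).

    Blends KL divergence between temperature-softened teacher and student
    distributions with cross-entropy on the hard labels.
    """
    # Softened distributions; the T^2 factor rescales the KL gradient to
    # the same magnitude as the cross-entropy term.
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    kd_term = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Standard supervised term on ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```

During fine-tuning of the decomposed (student) network, the teacher's logits would be computed with gradients disabled (e.g. under `torch.no_grad()`), so only the student's parameters are updated by this loss.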
