Lightweight Neural Networks from PCA & LDA Based Distilled Dense Neural Networks

Mohamed El Amine Seddik, Hassane Essafi, Abdallah Benzine, Mohamed Tamaazousti

28 Oct 2020

This paper presents two methods for building lightweight neural networks that achieve accuracy similar to heavyweight ones while consuming far less memory and computing resources, making them suitable for deployment on edge and IoT devices. The presented distillation methods are based on Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), respectively. Both rely on successively reducing the dimension of the hidden features of a given dense neural network (the teacher), and on training a smaller neural network (the student) that solves the initial learning problem along with a mapping problem onto the successive reduced feature spaces. The presented methods are compared to baselines in which the student networks are learned from scratch, and we show that the additional mapping problem significantly improves the performance (accuracy, memory, and computing resources) of the student networks.
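Below is a minimal sketch of the PCA variant of this idea, assuming a PyTorch and scikit-learn setup. The toy data, layer widths, reduced dimension, and the loss weight `alpha` are illustrative assumptions, not the authors' exact configuration: the teacher's hidden features are projected onto a PCA subspace, and the student is trained on the task loss plus a mapping loss toward those reduced features.

```python
# Sketch of PCA-based distillation: task loss + feature-mapping loss.
# All hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

torch.manual_seed(0)

# Toy data standing in for a real dataset.
X = torch.randn(512, 64)
y = torch.randint(0, 10, (512,))

# Heavyweight teacher: one wide hidden layer (more in practice),
# assumed already trained on (X, y).
teacher = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))

# 1) Collect the teacher's hidden features and fit a PCA on them.
with torch.no_grad():
    t_hidden = teacher[1](teacher[0](X))  # teacher hidden features
pca = PCA(n_components=32).fit(t_hidden.numpy())
# Fixed targets: teacher features projected onto the reduced space.
t_reduced = torch.from_numpy(pca.transform(t_hidden.numpy())).float()

# 2) Lightweight student whose hidden width matches the reduced dimension.
class Student(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(64, 32), nn.ReLU())
        self.head = nn.Linear(32, 10)

    def forward(self, x):
        h = self.hidden(x)
        return h, self.head(h)

student = Student()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
task_loss, map_loss = nn.CrossEntropyLoss(), nn.MSELoss()
alpha = 1.0  # weight of the feature-mapping term (assumed)

# 3) Train the student on the task loss plus the mapping loss that pulls
#    its hidden features toward the PCA-reduced teacher features.
for epoch in range(20):
    h, logits = student(X)
    loss = task_loss(logits, y) + alpha * map_loss(h, t_reduced)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Swapping the PCA for scikit-learn's LinearDiscriminantAnalysis, fit with the labels y, would give the LDA variant (the reduced dimension is then capped at n_classes - 1). With several hidden layers, the same reduction and mapping loss would be applied to each successive layer's features, as the abstract describes.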
