04 May 2020

Tensor decomposition has proven effective for solving many problems in signal processing and machine learning. Recently, it has also shown promise for compressing deep neural networks. In many applications of deep neural networks, reducing the number of parameters and the computation workload is critical for accelerating inference when the network is deployed. Modern deep neural networks consist of multiple layers with multi-dimensional weight arrays, for which tensor decomposition is a natural compression tool: the weight tensors in convolutional or fully-connected layers are decomposed with specified tensor ranks (e.g., canonical ranks, tensor train ranks). Conventional tensor-decomposition approaches to compressing deep neural networks select these ranks manually, which requires tedious human effort to fine-tune the performance. To overcome this issue, we propose a novel rank selection scheme, inspired by reinforcement learning, that automatically selects the ranks of the recently studied tensor ring decomposition in each convolutional layer. Experimental results validate that our learning-based rank selection significantly outperforms hand-crafted rank selection heuristics on a number of benchmark datasets, effectively compressing deep neural networks while maintaining comparable accuracy.
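
To make the compression mechanism concrete, the sketch below (an illustration only, not the authors' implementation) builds a convolutional kernel in tensor ring (TR) format and compares its parameter count with the dense kernel, showing how the chosen TR ranks determine the compression ratio. The kernel shape, the ranks, and the `tr_reconstruct` helper are all illustrative assumptions.

```python
# Minimal sketch of tensor ring (TR) compression of a conv kernel.
# All shapes, ranks, and names here are illustrative assumptions.
import numpy as np

def tr_reconstruct(cores):
    """Reconstruct a full tensor from its tensor ring cores.

    Each core has shape (r_k, n_k, r_{k+1}); the ranks wrap around, so the
    last core's right rank equals the first core's left rank. Element
    (i_1, ..., i_d) is the trace of the product of the matching core slices.
    """
    shape = tuple(core.shape[1] for core in cores)
    full = np.empty(shape)
    for idx in np.ndindex(*shape):
        mat = cores[0][:, idx[0], :]
        for k in range(1, len(cores)):
            mat = mat @ cores[k][:, idx[k], :]
        full[idx] = np.trace(mat)
    return full

# Example: a 3x3 conv kernel with 64 input and 64 output channels,
# viewed as a 4-way weight tensor of shape (3, 3, 64, 64).
kernel_shape = (3, 3, 64, 64)
ranks = (4, 4, 4, 4)  # TR ranks r_1..r_4; r_5 wraps around to r_1

rng = np.random.default_rng(0)
cores = [
    rng.standard_normal((ranks[k], kernel_shape[k], ranks[(k + 1) % 4]))
    for k in range(4)
]

full = tr_reconstruct(cores)                  # dense kernel, shape (3, 3, 64, 64)
dense_params = int(np.prod(kernel_shape))     # 36,864 parameters
tr_params = sum(core.size for core in cores)  # 2,144 parameters in the cores
print(full.shape, dense_params, tr_params, dense_params / tr_params)
```

With all TR ranks set to 4, the 3x3x64x64 kernel drops from 36,864 dense parameters to 2,144 core parameters, roughly a 17x reduction; larger ranks trade compression for approximation quality, which is precisely the per-layer trade-off the learning-based rank selection scheme is meant to navigate automatically.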
