INFERENCE ACCELERATION OF DEEP LEARNING CLASSIFIERS BASED ON RNN
Fekhr Eddine Keddous, Nadiya Shvai, Arcadi Llanza, Amir Nakib
This paper proposes a hybrid strategy for accelerating image classification inference based on the Modern Continuous Hopfield Neural Network (MHNN). To implement this strategy, the fully connected layers of convolutional neural networks (CNNs) are replaced by the MHNN. The proposed hybrid architecture achieves promising results on image classification tasks, as demonstrated through experiments on multiple benchmark datasets, including ImageNet, and on different CNN architectures. It delivers a substantial speedup in inference time (from 1.12x to 1.6x) and significant compression of the number of network parameters (from 1.32x to 49.37x), while maintaining high accuracy. Furthermore, the proposed CNN-MHNN model achieves 99.18% accuracy on the Noisy MNIST dataset, outperforming state-of-the-art models by 0.75% on the Added White Gaussian Noise (AWGN) version.
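The core operation behind replacing a fully connected head with a modern continuous Hopfield network can be sketched as a single retrieval step: the network stores a matrix of patterns and maps a query (here, a CNN feature vector) to a weighted combination of the stored patterns via a softmax over similarities. The sketch below is a minimal illustration of that update rule, not the paper's implementation; the prototype-per-class setup, the `beta` value, and all variable names are assumptions for the example.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mhnn_retrieve(X, xi, beta=8.0):
    """One update step of a modern continuous Hopfield network:
    xi_new = X @ softmax(beta * X.T @ xi).
    X has one stored pattern per column; xi is the query state."""
    return X @ softmax(beta * (X.T @ xi))

# Hypothetical toy usage: columns of X act as stored class prototypes,
# and a noisy query is mapped back to the closest stored pattern.
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 3))                 # 3 stored 16-dim patterns
query = X[:, 1] + 0.1 * rng.standard_normal(16)  # noisy copy of pattern 1
retrieved = mhnn_retrieve(X, query)
label = int(np.argmax(X.T @ retrieved))          # classify by best-matching pattern
```

Because retrieval typically converges in one step for well-separated patterns, such a head can be cheaper than stacked fully connected layers, which is the intuition behind the reported speedup and parameter compression.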