A Novel Sensitivity Metric For Mixed-Precision Quantization With Synthetic Data Generation
Donghyun Lee, Minkyoung Cho, Seungwon Lee, Joonho Song, Changkyu Choi
Post-training quantization is a representative technique for compressing neural networks, making them smaller and more efficient for deployment on edge devices. In practice, however, the user dataset is often inaccessible, which makes it difficult to guarantee the quality of the quantized network. In addition, existing approaches may use a single uniform bit-width across the network, resulting in significant accuracy degradation at extremely low bit-widths. To exploit multiple bit-widths, a sensitivity metric plays a key role in balancing accuracy and compression. In this paper, we propose a novel sensitivity metric that considers the effect of quantization error on the task loss as well as the interaction with other layers. Moreover, we develop labeled data generation methods that do not depend on a specific operation of the neural network. Our experiments show that the proposed metric better represents quantization sensitivity and that the generated data are more suitable for mixed-precision quantization.
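To make the idea of a layer-wise sensitivity metric concrete, the sketch below measures, for each layer, how much the task loss increases when only that layer's weights are quantized to a candidate bit-width. This is a minimal baseline-style illustration assuming a PyTorch model and a small calibration loader; it is not the authors' metric, which additionally accounts for inter-layer interactions and uses generated labeled data in place of the (inaccessible) user dataset. The helper `quantize_tensor` and the loss-increase definition are illustrative assumptions.

```python
import torch
import torch.nn as nn

def quantize_tensor(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric uniform quantization of a weight tensor (illustrative only)."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.round(w / scale).clamp(-qmax - 1, qmax) * scale

@torch.no_grad()
def layer_sensitivity(model: nn.Module, loader, criterion, bits: int) -> dict:
    """Return {layer_name: task-loss increase} when only that layer is quantized."""
    def task_loss(m: nn.Module) -> float:
        total, n = 0.0, 0
        for x, y in loader:  # calibration (or generated) labeled data
            total += criterion(m(x), y).item() * x.size(0)
            n += x.size(0)
        return total / n

    base_loss = task_loss(model)  # full-precision reference
    sensitivity = {}
    for name, module in model.named_modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            original = module.weight.data.clone()
            module.weight.data = quantize_tensor(original, bits)
            sensitivity[name] = task_loss(model) - base_loss
            module.weight.data = original  # restore full precision
    return sensitivity
```

In a mixed-precision setting, such per-layer scores would then guide bit-width assignment (lower bits for insensitive layers, higher bits for sensitive ones) under a given compression budget.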