Length: 00:10:42
10 Jun 2021

Low-precision deep neural networks (DNNs) are needed for efficient implementations, but severe quantization of weights often sacrifices generalization capability and lowers test accuracy. We present a new quantized neural network optimization approach, stochastic quantized weight averaging (SQWA), to design low-precision DNNs with good generalization capability using model averaging. The proposed approach consists of (1) floating-point model training, (2) direct quantization of the weights, (3) capturing multiple low-precision models during retraining with cyclical learning rates, (4) averaging the captured models, and (5) re-quantizing the averaged model and fine-tuning it with low learning rates. With SQWA training, we obtained the best-performing QDNNs for image classification on the ImageNet dataset and for semantic segmentation on the Pascal VOC 2012 dataset.
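The five-step recipe maps naturally onto an ordinary training loop. The sketch below illustrates the flow in PyTorch on a toy model; the uniform quantize_weights_ helper, the 2-bit grid, the cycle length, and the learning rates are illustrative assumptions, not the authors' reference implementation.

```python
import copy
import torch
import torch.nn as nn


def quantize_weights_(model, n_bits=2):
    """Symmetric uniform quantization of every weight tensor, in place."""
    q_max = 2 ** (n_bits - 1) - 1                   # 2-bit -> grid {-1, 0, +1} * step
    for name, p in model.named_parameters():
        if "weight" in name:
            step = p.detach().abs().max().clamp(min=1e-8) / q_max
            p.data = torch.clamp(torch.round(p.data / step), -q_max, q_max) * step


def average_models(models):
    """Average the parameters of the captured models (SWA-style, in float)."""
    avg = copy.deepcopy(models[0])
    avg_sd = avg.state_dict()
    for key in avg_sd:
        avg_sd[key] = torch.stack([m.state_dict()[key] for m in models]).mean(dim=0)
    avg.load_state_dict(avg_sd)
    return avg


# Toy model and data standing in for a real network/dataset (illustration only).
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
x, y = torch.randn(256, 16), torch.randint(0, 10, (256,))
loss_fn = nn.CrossEntropyLoss()

# (1) Floating-point pre-training.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(50):
    opt.zero_grad(); loss_fn(model(x), y).backward(); opt.step()

# (2) Direct quantization of the trained float weights.
quantize_weights_(model)

# (3) Retrain with a cyclical learning rate and capture a low-precision
#     snapshot at the end of each cycle (the low-learning-rate point).
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
sched = torch.optim.lr_scheduler.CyclicLR(
    opt, base_lr=1e-3, max_lr=5e-2, step_size_up=10, cycle_momentum=False)
snapshots = []
for step in range(1, 81):
    opt.zero_grad(); loss_fn(model(x), y).backward(); opt.step(); sched.step()
    if step % 20 == 0:                              # one full cycle = 20 steps here
        snap = copy.deepcopy(model)
        quantize_weights_(snap)                     # capture a low-precision model
        snapshots.append(snap)

# (4) Average the captured models; the result is generally off the quantization grid.
model = average_models(snapshots)

# (5) Re-quantize the averaged model and fine-tune with a low learning rate.
#     Re-quantizing after each step is a crude stand-in for quantization-aware
#     fine-tuning; the presentation's full QDNN training procedure is not shown.
quantize_weights_(model)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(20):
    opt.zero_grad(); loss_fn(model(x), y).backward(); opt.step()
    quantize_weights_(model)                        # keep weights on the 2-bit grid
```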

Chairs:
Xinmiao Zhang
