Accelerating Distributed Deep Learning By Adaptive Gradient Quantization

Jinrong Guo, Wang Wang, Jizhong Han, Yijun Lu, Songlin Hu, Ruixuan Li

04 May 2020

To accelerate distributed deep learning, gradient quantization techniques are widely used to reduce the communication cost. However, existing quantization schemes suffer from either model accuracy degradation or a low compression ratio (arising from a redundant setting of the quantization level or a high overhead in determining the level). In this work, we propose a novel adaptive quantization scheme (AdaQS) to explore the balance between model accuracy and quantization level. AdaQS determines the quantization level automatically according to the gradient's mean-to-standard-deviation ratio (MSDR). Then, to reduce the quantization overhead, we employ a computationally friendly moment estimation to calculate the MSDR. Finally, a theoretical convergence analysis of AdaQS is conducted for non-convex objectives. Experiments demonstrate that AdaQS performs excellently on the very deep GoogleNet model, with a 2.55% accuracy improvement relative to vanilla SGD, and achieves a 1.8x end-to-end speedup on AlexNet in a distributed cluster with 4*4 GPUs.
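The abstract names two ingredients: a quantization level chosen from the gradient's mean-to-standard-deviation ratio (MSDR), and a cheap moment estimate used to compute that ratio. The sketch below illustrates the general idea in NumPy, combining a QSGD-style unbiased stochastic quantizer with exponential moving averages of the gradient moments. The decay factor `beta`, the constant `c`, and the clipped MSDR-to-level mapping are assumptions for illustration only, not the paper's published algorithm.

```python
import numpy as np

def stochastic_quantize(grad, num_levels):
    """Unbiased stochastic quantization onto `num_levels` uniform levels (QSGD-style)."""
    scale = np.max(np.abs(grad))
    if scale == 0:
        return np.zeros_like(grad)
    scaled = np.abs(grad) / scale * num_levels      # map |g| into [0, num_levels]
    lower = np.floor(scaled)
    prob_up = scaled - lower                        # probability of rounding up
    q_levels = lower + (np.random.rand(*grad.shape) < prob_up)
    return np.sign(grad) * q_levels * scale / num_levels

class AdaQSSketch:
    """Tracks running first and second moments of the gradient so the MSDR can be
    estimated cheaply, then picks a quantization level from it. The decay `beta`,
    the constant `c`, and the MSDR-to-level mapping are illustrative assumptions."""

    def __init__(self, beta=0.9, c=8.0, min_levels=2, max_levels=256):
        self.beta, self.c = beta, c
        self.min_levels, self.max_levels = min_levels, max_levels
        self.m1, self.m2 = 0.0, 0.0

    def quantize(self, grad):
        # Exponential moving averages of E[g] and E[g^2] over all coordinates.
        self.m1 = self.beta * self.m1 + (1 - self.beta) * float(np.mean(grad))
        self.m2 = self.beta * self.m2 + (1 - self.beta) * float(np.mean(grad ** 2))
        std = np.sqrt(max(self.m2 - self.m1 ** 2, 1e-12))
        msdr = abs(self.m1) / std
        # Assumed mapping: use more levels when the MSDR is small (noisier
        # gradients get finer quantization), clipped to a practical range.
        num_levels = int(np.clip(np.ceil(self.c / max(msdr, 1e-8)),
                                 self.min_levels, self.max_levels))
        return stochastic_quantize(grad, num_levels), num_levels

# Example: quantize a synthetic gradient before communicating it to other workers.
g = (np.random.randn(1_000_000) * 0.01).astype(np.float32)
q, levels = AdaQSSketch().quantize(g)
```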
