12 May 2022

The training process of federated learning is known to be vulnerable to adversarial attacks (e.g., backdoor attacks). Previous work showed that differential privacy (DP) can be used to defend against backdoor attacks, yet at the cost of a substantial loss in model utility. To address this issue, in this paper we propose a DP-based defense method, called Clip Norm Decay (CND), that maintains utility while defending against backdoor attacks. CND reduces the injected noise by decreasing the clipping threshold of model updates over the course of training. In particular, our algorithm bounds the norm of malicious updates by adaptively setting appropriate thresholds according to the current model updates. Empirical results show that CND substantially enhances the accuracy of the main task when defending against backdoor attacks. Moreover, extensive experiments demonstrate that our method provides a stronger defense than the original DP mechanism, further reducing the attack success rate, even under a strong threat model.
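The abstract describes CND only at a high level: clip client updates, add noise calibrated to the clipping threshold, and shrink the threshold adaptively based on the observed update norms. The sketch below is one plausible reading of that mechanism, not the authors' exact algorithm; the helper names (clip_update, aggregate_round, next_threshold) and the quantile-based decay rule are assumptions introduced for illustration.

```python
# Minimal sketch of DP-style federated aggregation with a decaying clipping
# threshold, in the spirit of Clip Norm Decay (CND). The decay rule and all
# function names are illustrative assumptions, not the paper's algorithm.

import numpy as np

def clip_update(update, threshold):
    """Scale an update so its L2 norm does not exceed `threshold`."""
    norm = np.linalg.norm(update)
    if norm > threshold:
        update = update * (threshold / norm)
    return update

def aggregate_round(updates, threshold, noise_multiplier, rng):
    """Clip each client update, average, and add Gaussian noise calibrated
    to the current threshold: a smaller threshold means less injected noise,
    which is how CND recovers main-task accuracy."""
    clipped = [clip_update(u, threshold) for u in updates]
    mean = np.mean(clipped, axis=0)
    sigma = noise_multiplier * threshold / len(updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

def next_threshold(updates, current_threshold, quantile=0.5):
    """Shrink the threshold toward a quantile of the observed update norms,
    so typical benign updates pass through while unusually large (potentially
    malicious) updates stay bounded. The quantile rule is an assumption."""
    norms = [np.linalg.norm(u) for u in updates]
    target = float(np.quantile(norms, quantile))
    return min(current_threshold, target)

# Example: a few rounds with synthetic client updates.
rng = np.random.default_rng(0)
threshold = 1.0
for rnd in range(3):
    updates = [rng.normal(0.0, 0.1, size=10) for _ in range(5)]
    global_update = aggregate_round(updates, threshold,
                                    noise_multiplier=1.0, rng=rng)
    threshold = next_threshold(updates, threshold)
```

Note that because the Gaussian noise scale is proportional to the threshold, decaying the threshold directly reduces the noise injected per round while still bounding the norm of any single malicious update.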
