Combating False Sense of Security: Breaking the Defense of Adversarial Training via Non-Gradient Adversarial Attack
Mingyuan Fan, Wenzhong Guo, Ximeng Liu, Yang Liu, Cen Chen, Shengxing Yu
Adversarial training is widely believed to be the most robust and effective defense against adversarial attacks, and gradient-based adversarial attack methods are generally adopted to evaluate its effectiveness. However, in this paper, by examining the existing adversarial attack literature, we find that the adversarial examples generated by these attack methods tend to be less imperceptible, which may lead to an inaccurate estimate of the effectiveness of adversarial training. Existing adversarial attacks mostly rely on gradient-based optimization, which has difficulty finding the most effective adversarial examples (i.e., the global extreme points of the loss). In contrast, in this work we propose a novel Non-Gradient Attack (NGA) to overcome this problem. Extensive experiments show that NGA significantly outperforms state-of-the-art adversarial attacks, improving the Attack Success Rate (ASR) by 2%~7%.
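To make the contrast between the two attack families concrete, below is a minimal PyTorch sketch of a standard gradient-based attack (PGD) next to a generic gradient-free random-search attack. The random-search routine is only an illustration of the non-gradient family; it is not the paper's NGA, whose algorithmic details are not given in this abstract. The classifier `model`, the budget `eps`, and the query count `queries` are assumed placeholders.

```python
# Sketch only: PGD (gradient-based) vs. a generic gradient-free random search.
# The gradient-free routine is NOT the paper's NGA; it just illustrates how an
# attack can search for adversarial examples without any gradient information.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Gradient-based attack: follows the sign of the loss gradient, so it can
    stall in local extreme points of the loss surface."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into L_inf ball
        x_adv = x_adv.clamp(0, 1)                 # keep a valid image
    return x_adv.detach()

def random_search_attack(model, x, y, eps=8/255, queries=1000):
    """Gradient-free attack: proposes random perturbations and keeps any that
    increase the loss, requiring no gradient access at all."""
    x_adv = (x + eps * torch.empty_like(x).uniform_(-1, 1)).clamp(0, 1)
    with torch.no_grad():
        best_loss = F.cross_entropy(model(x_adv), y, reduction="none")
        for _ in range(queries):
            delta = eps * torch.empty_like(x).uniform_(-1, 1)  # random proposal
            cand = (x + ((x_adv - x) + delta).clamp(-eps, eps)).clamp(0, 1)
            loss = F.cross_entropy(model(cand), y, reduction="none")
            improved = loss > best_loss                        # per-sample check
            mask = improved.view(-1, *([1] * (x.dim() - 1)))
            x_adv = torch.where(mask, cand, x_adv)
            best_loss = torch.where(improved, loss, best_loss)
    return x_adv
```

Because the random-search variant only compares loss values, it cannot be misled by a flat or obfuscated gradient, which is the intuition behind evaluating adversarially trained models with non-gradient attacks.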