LOCAL TEXTURE COMPLEXITY GUIDED ADVERSARIAL ATTACK
Jiefei Zhang, Jie Wang, Wanli Lyu, Zhaoxia Yin
Extensive research has revealed that deep neural networks are vulnerable to adversarial examples. In addition, recent studies have demonstrated that convolutional neural networks tend to recognize the texture (high-frequency components) rather than the shape (low-frequency components) of images. Thus, crafting adversarial perturbations in the frequency domain has been proposed to enhance attack strength. However, these methods either increase the perceptibility of adversarial examples to the human visual system (HVS) or increase the computational cost of generating adversarial examples. To generate adversarial examples with better imperceptibility at lower computational cost, we propose an adversarial attack method that constructs adversarial examples in the frequency domain, guided by the local texture complexity of the image. Experiments on ImageNet and CIFAR-10 show that the proposed method is effective in generating adversarial examples imperceptible to the HVS.
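To make the idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of how local texture complexity can guide a frequency-domain perturbation: texture complexity is estimated as the local standard deviation over small patches, the perturbation budget is concentrated on highly textured regions, and the low-frequency DCT coefficients of the perturbation are suppressed so the change rides on texture. All names and parameters (patch_size, eps, the gradient sign input, the low-frequency cutoff) are illustrative assumptions.

```python
# Hedged sketch: texture-complexity-guided, high-frequency perturbation.
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.fft import dctn, idctn

def local_texture_complexity(img, patch_size=8):
    """Per-pixel local standard deviation over patch_size x patch_size windows."""
    mean = uniform_filter(img, size=patch_size)
    sq_mean = uniform_filter(img ** 2, size=patch_size)
    return np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))

def texture_guided_perturb(img, grad_sign, eps=8 / 255):
    """Weight an FGSM-style perturbation by normalized texture complexity,
    then keep only its high-frequency DCT components."""
    ltc = local_texture_complexity(img)
    mask = ltc / (ltc.max() + 1e-12)       # more budget on textured regions
    delta = eps * mask * grad_sign         # spatially weighted perturbation
    d = dctn(delta, norm="ortho")
    h, w = d.shape
    d[: h // 4, : w // 4] = 0.0            # crude low-frequency cutoff (assumed)
    delta_hf = idctn(d, norm="ortho")
    return np.clip(img + delta_hf, 0.0, 1.0)

# Toy usage: random grayscale image and a random gradient sign standing in
# for the sign of a real loss gradient.
rng = np.random.default_rng(0)
x = rng.random((32, 32))
g = np.sign(rng.standard_normal((32, 32)))
x_adv = texture_guided_perturb(x, g)
```

The design intuition matches the abstract: textured regions mask perturbations from the HVS, so concentrating the budget there (and in high frequencies) improves imperceptibility, while the per-image statistics are cheap to compute compared with iterative frequency-domain optimization.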