
LOCAL TEXTURE COMPLEXITY GUIDED ADVERSARIAL ATTACK

Jiefei Zhang, Jie Wang, Wanli Lyu, Zhaoxia Yin

Poster 09 Oct 2023

Extensive research has revealed that deep neural networks are vulnerable to adversarial examples. Moreover, recent studies have demonstrated that convolutional neural networks tend to recognize the texture (high-frequency components) of images rather than their shape (low-frequency components). Crafting adversarial perturbations in the frequency domain has therefore been proposed to increase attack strength. However, existing methods either make adversarial examples more perceptible to the human visual system (HVS) or increase the computational cost of generating them. To generate adversarial examples with better imperceptibility at lower computational cost, we propose an adversarial attack that constructs adversarial examples in the frequency domain, guided by the local texture complexity of the image. Experiments on ImageNet and CIFAR-10 show that the proposed method is effective in generating adversarial examples imperceptible to the HVS.
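The abstract does not specify how local texture complexity is measured or applied. As a minimal illustrative sketch (not the paper's actual method), one could use the per-block standard deviation as a texture-complexity proxy and scale an FGSM-style perturbation by it, so that noise concentrates in textured regions where the HVS is less sensitive; the function names, block size, and epsilon below are all hypothetical choices:

```python
import numpy as np

def local_texture_complexity(img, block=8):
    """Per-block standard deviation as a simple texture-complexity proxy.
    Hypothetical stand-in for the paper's (unspecified) complexity measure."""
    h, w = img.shape
    comp = np.zeros_like(img, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = img[i:i + block, j:j + block]
            comp[i:i + block, j:j + block] = patch.std()
    return comp

def texture_guided_perturb(img, grad_sign, eps=8 / 255, block=8):
    """Scale an FGSM-style perturbation by normalized local texture
    complexity: flat regions receive almost no noise, textured regions
    receive up to the full eps budget."""
    comp = local_texture_complexity(img, block)
    mask = comp / (comp.max() + 1e-12)   # normalize to [0, 1]
    adv = img + eps * mask * grad_sign   # grad_sign: sign of the loss gradient
    return np.clip(adv, 0.0, 1.0)        # keep a valid image range
```

In this sketch the complexity map acts purely as a spatial weighting; the paper's frequency-domain construction (e.g., which transform and which coefficients are perturbed) would replace the plain additive step.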
