
STEALTHY BACKDOOR ATTACK WITH ADVERSARIAL TRAINING

Le Feng, Sheng Li, Zhenxing Qian, Xinpeng Zhang

Length: 00:09:44
12 May 2022

Research shows that deep neural networks are vulnerable to backdoor attacks. A backdoored network behaves normally on clean examples, but once a backdoor pattern is attached to an example, that example is classified into the attacker's target class. In previous backdoor attack schemes, the backdoor patterns are not stealthy and may be detected. To make the backdoor patterns stealthy, we explore an invisible and example-dependent backdoor attack scheme. Specifically, we employ a backdoor generation network to produce an invisible backdoor pattern for each example, so the patterns are not generic across examples. Without further measures, however, such a backdoor attack scheme cannot bypass Neural Cleanse detection. We therefore propose adversarial training to bypass Neural Cleanse. Experiments show that the proposed backdoor attack achieves a considerable attack success rate and invisibility, and can bypass existing defense strategies.
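
As a rough illustration of the first component described in the abstract, the sketch below shows one plausible way to realize an example-dependent, invisibility-bounded pattern generator and a joint poisoning step in PyTorch. Everything here (the Generator architecture, the epsilon bound, the target_class and train_step names) is an illustrative assumption, not the authors' implementation; the adversarial training against Neural Cleanse's reverse-engineered triggers is omitted.

```python
# Minimal sketch: an example-dependent, invisibility-bounded backdoor
# generator. All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps each clean image to its own small residual pattern."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, epsilon=8 / 255):
        # Tanh output lies in [-1, 1]; scaling keeps the pattern's
        # L-infinity norm below epsilon, so it stays visually invisible.
        return epsilon * self.net(x)

def poison(x, generator):
    """Attach an invisible, example-dependent pattern to a clean batch."""
    return (x + generator(x)).clamp(0.0, 1.0)

def train_step(classifier, generator, x, y, target_class, opt):
    # Joint objective: correct labels on clean inputs, the attacker's
    # target class on backdoored inputs.
    criterion = nn.CrossEntropyLoss()
    x_bd = poison(x, generator)
    y_bd = torch.full_like(y, target_class)
    loss = criterion(classifier(x), y) + criterion(classifier(x_bd), y_bd)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Because the pattern is a function of each input rather than a fixed patch, patterns generated for one example are not expected to transfer to another, which is the example-dependence property the abstract highlights.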
