Poster 09 Oct 2023

While extensive studies have pushed the limits of transferability for untargeted attacks, transferable targeted attacks remain extremely challenging. This paper finds that labels with high confidence in the source model are also likely to retain high confidence in the target model. This simple and intuitive observation inspires us to handle high-confidence labels carefully when generating targeted adversarial examples for better transferability. Specifically, we integrate the untargeted loss function into the targeted attack to push the adversarial examples away from the original label while approaching the target label. Furthermore, we suppress other high-confidence labels in the source model with an orthogonal gradient. We validate the proposed scheme by mounting targeted attacks on the ImageNet dataset. Experiments in various scenarios show that our proposed scheme improves the transferability of state-of-the-art targeted attacks. Our code is available at: https://github.com/zengh5/Transferable_targeted_attack.
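A minimal NumPy sketch of how such an update might combine the three ingredients described above: descending the targeted loss, ascending the untargeted (original-label) loss, and suppressing other high-confidence labels only along directions orthogonal to the target gradient. All function names, the sign-gradient update rule, and the way the terms are combined are illustrative assumptions for exposition, not the authors' implementation (see the linked repository for that):

```python
import numpy as np

def orthogonal_component(g_suppress, g_target):
    """Remove from g_suppress its projection onto g_target, so that
    suppressing other high-confidence labels does not cancel progress
    toward the target label (hypothetical reading of the paper's idea)."""
    g_t = g_target.ravel()
    g_s = g_suppress.ravel()
    proj = (g_s @ g_t) / (g_t @ g_t) * g_t
    return (g_s - proj).reshape(g_suppress.shape)

def combined_step(x, g_target, g_original, g_suppress, alpha=1.0 / 255):
    """One illustrative sign-gradient step on the adversarial image x.

    g_target:   gradient of the targeted loss (to be minimized)
    g_original: gradient of the original-label loss (to be maximized,
                i.e., the untargeted term), so it enters with a minus sign
    g_suppress: gradient of a loss penalizing other high-confidence labels,
                applied only orthogonally to g_target
    """
    g_total = g_target - g_original + orthogonal_component(g_suppress, g_target)
    # Sign-gradient descent step, clipped back to the valid pixel range.
    return np.clip(x - alpha * np.sign(g_total), 0.0, 1.0)
```

By construction, the suppression term contributes nothing along the target direction (its dot product with `g_target` is zero), which is one plausible way to "suppress other high-confidence labels with an orthogonal gradient" without interfering with convergence to the target label.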
