  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:11:28
20 Sep 2021

Many studies on reducing the adversarial vulnerability of deep neural networks have been published in the field of machine learning, and various adversarial attacks have been proposed to evaluate the actual robustness of networks. Most previous works have focused on white-box settings, which assume that the adversary has full access to the target model. Since such settings are impractical in real-world situations, recent studies on black-box attacks have received a lot of attention. However, existing black-box attacks have critical limitations, such as a low attack success rate or an over-reliance on gradient estimation and decision boundaries, which makes them ineffective against even weak defenses based on gradient obfuscation. In this paper, we propose a novel gradient-free decision-based black-box attack using random search optimization. The proposed method needs only the hard label (decision-based output) of the target model and remains effective against defenses using gradient obfuscation. Experimental results validate its query efficiency and improved L2 distance.
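The abstract does not spell out the algorithm, but the core idea of a gradient-free, decision-based attack with random search can be sketched as follows: start from some misclassified point, then repeatedly propose random perturbations and accept only those that stay adversarial while shrinking the L2 distance to the original input. The function names, step counts, and step size below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def random_search_attack(predict, x, y_true, steps=500, sigma=0.1, seed=0):
    """Hypothetical sketch of a decision-based (hard-label) random-search attack.

    predict : callable returning only the predicted class label (hard label).
    x       : original input, values assumed to lie in [0, 1].
    y_true  : the correct label of x; we want predict(x_adv) != y_true.
    """
    rng = np.random.default_rng(seed)

    # 1) Find an initial adversarial point by uniform random sampling.
    x_adv = None
    for _ in range(1000):
        cand = rng.uniform(0.0, 1.0, size=x.shape)
        if predict(cand) != y_true:
            x_adv = cand
            break
    if x_adv is None:
        return None  # failed to find any adversarial starting point

    # 2) Random search: accept a proposal only if it is still
    #    misclassified AND strictly closer to x in L2 distance.
    for _ in range(steps):
        cand = np.clip(x_adv + sigma * rng.standard_normal(x.shape), 0.0, 1.0)
        if (predict(cand) != y_true
                and np.linalg.norm(cand - x) < np.linalg.norm(x_adv - x)):
            x_adv = cand
    return x_adv
```

Because the acceptance rule queries only the hard label, no gradients (or gradient estimates) are involved, which is why this style of attack is unaffected by gradient obfuscation.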
