Length: 00:09:37
08 May 2022

Adversarial examples have emerged as a severe concern for the security of neural networks. However, the Lp distances typically used as a similarity constraint often fail to capture human-perceived similarity. Under challenging scenarios, such as attacking a defended model, this discrepancy leads to severe degradation of image fidelity. In this paper, we find adversarial examples that better match the natural distribution of the input domain by integrating signal processing techniques into the attack framework, dynamically altering the allowed perturbation with a Rule Adjustable Distance (RAD). The framework allows us to easily incorporate structural similarity, Otsu's method, or variance filtering to increase the fidelity of adversarial images while still adhering to an Lp bound.
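The abstract does not spell out the RAD formulation, so the sketch below is only a rough illustration of one named ingredient: scaling a global L-infinity budget per pixel by normalized local variance ("variance filtering") and projecting a perturbation onto that rule-adjusted budget. The function names (`local_variance`, `variance_budget`, `rad_project`), the box-filter window, and the normalization rule are assumptions for illustration, not the paper's actual method.

```python
# Hedged sketch of a variance-based, per-pixel perturbation budget.
# This is an assumed construction, not the RAD definition from the paper.
import numpy as np
from scipy.ndimage import uniform_filter


def local_variance(img, size=5):
    """Local variance via box filters: Var = E[x^2] - (E[x])^2."""
    mean = uniform_filter(img, size=size)
    mean_sq = uniform_filter(img * img, size=size)
    return np.clip(mean_sq - mean * mean, 0.0, None)


def variance_budget(img, eps, size=5):
    """Scale the global L_inf budget eps by normalized local variance,
    so smooth regions receive a smaller allowed perturbation."""
    var = local_variance(img, size=size)
    norm = var / (var.max() + 1e-12)
    return eps * norm


def rad_project(x_adv, x, eps_map):
    """Clip the perturbation per pixel to the rule-adjusted budget,
    then clip back into the valid image range [0, 1]."""
    delta = np.clip(x_adv - x, -eps_map, eps_map)
    return np.clip(x + delta, 0.0, 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random((32, 32)).astype(np.float32)               # stand-in image
    grad = rng.standard_normal(x.shape).astype(np.float32)     # stand-in loss gradient
    eps_map = variance_budget(x, eps=8 / 255)
    x_adv = rad_project(x + 0.01 * np.sign(grad), x, eps_map)  # one FGSM-like step
    print("max |delta|:", np.abs(x_adv - x).max())
```

In an iterative attack, this projection would replace the usual uniform L-infinity clipping step, concentrating the perturbation in textured regions where it is less visible.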
