Length: 00:08:15
12 May 2022

It is well known that constructing adversarial examples helps make a machine learning system more robust by exposing its vulnerabilities. We present a fundamental approach to adversarial attacks on image cropping systems. Existing adversarial examples mainly target classifiers or detectors; in contrast, the proposed method targets the saliency maps commonly used by deep-learning-based image cropping models in social media. Our method perturbs the input image so that the cropping model selects a different region to crop. We measure the shift in the peak of the saliency map and use this quantity to evaluate the effectiveness of the attack. Experiments on the CAT2000 dataset demonstrate that the proposed method outperforms baseline methods.
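
The abstract only sketches the idea, so the following is a minimal, hypothetical illustration of one way such a peak-shift attack could be set up, assuming a PyTorch saliency model `saliency_model(image)` that returns an HxW map. The PGD-style L-inf projection, the peak-suppression surrogate loss, and all parameter values (`eps`, `alpha`, `steps`, `radius`) are illustrative assumptions, not the settings of the presented work.

```python
import math
import torch

def peak_coords(saliency_map: torch.Tensor):
    """Return (row, col) of the saliency map's maximum as Python ints."""
    idx = int(torch.argmax(saliency_map))
    w = saliency_map.shape[-1]
    return idx // w, idx % w

def attack_saliency_peak(saliency_model, image, eps=8 / 255, alpha=1 / 255,
                         steps=40, radius=8):
    """Perturb `image` (1x3xHxW, values in [0,1]) so the saliency peak moves
    away from its original location, within an L-inf ball of radius `eps`."""
    image = image.clone().detach()
    with torch.no_grad():
        r, c = peak_coords(saliency_model(image).squeeze())

    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        sal = saliency_model(adv).squeeze()
        # Surrogate objective: suppress saliency mass around the original peak,
        # which pushes the argmax (and hence the crop region) elsewhere.
        loss = sal[max(r - radius, 0):r + radius,
                   max(c - radius, 0):c + radius].mean()
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv - alpha * grad.sign()               # step down the surrogate loss
            adv = image + (adv - image).clamp(-eps, eps)  # project back into the eps-ball
            adv = adv.clamp(0, 1).detach()

    with torch.no_grad():
        r2, c2 = peak_coords(saliency_model(adv).squeeze())
    shift = math.hypot(r2 - r, c2 - c)  # peak displacement in pixels: the evaluation measure
    return adv, shift
```

The returned `shift` corresponds to the quantity described in the abstract: the distance between the saliency peak before and after the attack, which serves as the quantitative measure of how far the cropped region is displaced.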
