OBJECT-ORIENTED BACKDOOR ATTACK AGAINST IMAGE CAPTIONING
Meiling Li, Nan Zhong, Xinpeng Zhang, Zhenxing Qian, Sheng Li
Backdoor attacks against image classification have been widely studied and proven successful, while there is little research on backdoor attacks against vision-language models. In this paper, we explore backdoor attacks on image captioning models by poisoning the training data. We assume that the attacker has full access to the training dataset but cannot intervene in model construction or the training process. Specifically, a portion of the benign training samples is randomly selected to be poisoned. Then, considering that captions usually unfold around the objects in an image, we design an object-oriented method to craft poisons, which modifies pixel values within a small range, with the number of modified pixels proportional to the scale of the currently detected object region. After training on the poisoned data, the attacked model behaves normally on benign images, but for poisoned images it generates sentences irrelevant to the given image. The attack controls the model's behavior on specific test images without sacrificing the generation performance on benign test images. Our method demonstrates the vulnerability of image captioning models to backdoor attacks, and we hope this work raises awareness of the need to defend against backdoor attacks in the image captioning field.
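For illustration only, the following is a minimal sketch of the object-oriented poisoning step as described in the abstract: a few pixels inside each detected object region are perturbed by a small amount, with the number of perturbed pixels proportional to the region's area. The function name, the perturbation budget `epsilon`, and the modification ratio `ratio` are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def poison_image(image, boxes, epsilon=3, ratio=0.1, rng=None):
    """Hedged sketch of object-oriented poisoning (not the paper's code).

    image   : H x W x 3 uint8 array (a benign training image)
    boxes   : list of (x1, y1, x2, y2) object regions from any detector
    epsilon : assumed maximum per-pixel perturbation magnitude (small range)
    ratio   : assumed fraction of pixels modified inside each object region
    """
    rng = rng or np.random.default_rng()
    poisoned = image.astype(np.int16)
    for x1, y1, x2, y2 in boxes:
        area = (y2 - y1) * (x2 - x1)
        # Number of modified pixels scales with the object region's area.
        n_pixels = max(1, int(ratio * area))
        ys = rng.integers(y1, y2, size=n_pixels)
        xs = rng.integers(x1, x2, size=n_pixels)
        # Perturb the selected pixels by a small signed amount.
        delta = rng.integers(-epsilon, epsilon + 1, size=(n_pixels, 3))
        poisoned[ys, xs] += delta
    return np.clip(poisoned, 0, 255).astype(np.uint8)
```

In this sketch the poisoned images would then be paired with attacker-chosen, image-irrelevant captions before being mixed back into the training set, which is the data-poisoning setting the abstract assumes.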