  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:08:49
12 May 2022

It is well recognized that 3D visual tasks based on deep neural networks are vulnerable to adversarial attacks. Existing methods for generating adversarial examples mainly inject imperceptible perturbations into the inputs. However, the aggressive characteristics of geometric transformations, which are common in 3D objects, are rarely investigated. In this paper, we propose a non-rigid-transformation-based adversarial attack method against 3D object tracking. The adversarial example is generated by deforming parts of the tracking template, causing the tracking predictions to deviate from the ground truth. Specifically, a clustering-based region segmentation module is designed to divide the tracking template into local regions. Furthermore, an objective function combining IoU loss, confidence loss, and distance loss is leveraged to update the poses of the local regions. Experiments conducted on an efficient 3D tracker demonstrate that 3D trackers are extremely vulnerable to non-rigid deformation.
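The two components named in the abstract — clustering-based segmentation of the template into local regions, and a combined objective the attacker optimizes — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the k-means clustering, the loss weights, and the sign convention (the attacker drives IoU and confidence down and center distance up) are all assumptions made for the example.

```python
import numpy as np

def segment_regions(template, k=4, iters=20, seed=0):
    """Divide a template point cloud (N, 3) or (N, 2) into k local regions.

    A simple k-means stand-in for the paper's clustering-based region
    segmentation module; returns a per-point region label in [0, k).
    """
    rng = np.random.default_rng(seed)
    template = np.asarray(template, dtype=float)
    # Initialize cluster centers from k distinct template points.
    centers = template[rng.choice(len(template), size=k, replace=False)]
    labels = np.zeros(len(template), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(template[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Recompute centers from current assignments.
        for j in range(k):
            pts = template[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels

def attack_objective(iou, confidence, center_dist,
                     w_iou=1.0, w_conf=1.0, w_dist=1.0):
    """Combined loss guiding the per-region pose updates (weights hypothetical).

    Minimizing this value pushes the tracker's predicted-box IoU and
    confidence down while pushing its center away from the ground truth.
    """
    return w_iou * iou + w_conf * confidence - w_dist * center_dist
```

In the attack loop described by the abstract, each region's pose (rotation and translation) would be updated to minimize `attack_objective` evaluated on the tracker's output, deforming the template non-rigidly region by region.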
