  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:09:06
10 Jun 2021

In this paper, we investigate the adversarial robustness of principal component analysis (PCA) algorithms. In the considered setup, a powerful adversary can add a carefully designed data point to the original data matrix, with the goal of maximizing the distance between the subspace learned from the original data and the subspace learned from the modified data. Unlike most existing research, which measures this distance with the Asimov distance, we employ a more precise and informative measure, the chordal distance, which enables a more comprehensive analysis of an outlier's influence on PCA. Our analysis shows that a single outlier can completely change the first principal angle while the second principal angle changes very little. We also demonstrate the performance of our strategy with experimental results on synthetic and real data.
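As an illustration of the two subspace distances the abstract contrasts, the sketch below computes both from the principal angles between two subspaces. The helper names and the example subspaces are ours for illustration, not from the paper; the angle computation uses `scipy.linalg.subspace_angles`.

```python
import numpy as np
from scipy.linalg import subspace_angles


def chordal_distance(A, B):
    """Chordal distance between the column spaces of A and B:
    the square root of the sum of squared sines of all principal angles."""
    theta = subspace_angles(A, B)  # principal angles in radians, descending
    return np.sqrt(np.sum(np.sin(theta) ** 2))


def asimov_distance(A, B):
    """Asimov distance: only the largest principal angle."""
    return np.max(subspace_angles(A, B))


# Two 2-D subspaces of R^3 that share one direction (e1) and differ in the other.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # span{e1, e2}
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # span{e1, e3}

print(chordal_distance(A, B))  # aggregates all principal angles
print(asimov_distance(A, B))   # reports only the largest angle
```

The chordal distance aggregates every principal angle, so it can register how much each individual angle moves, whereas the Asimov distance reports only the largest angle; this is the sense in which the former gives a more comprehensive picture of an outlier's effect on the learned subspace.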

Chairs:
Wenwu Wang
