Cost Aware Adversarial Learning

Shashini De Silva, Jinsub Kim, Raviv Raich

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 11:47
04 May 2020

The problem of making classifier design resilient to test data falsification is considered. In the literature, a few countermeasures have been proposed to defend machine learning algorithms against test data falsification, but a common assumption employed therein is that all feature entries of the test data are equally vulnerable to falsification. When test data entries are collected from various sources, such as different types of sensor devices, their vulnerability to falsification attacks can differ significantly depending on how the data creation and transmission procedures are secured. In this paper, we present an attack-cost-aware adversarial learning framework that takes into account the (potentially inhomogeneous) vulnerability characteristics of test data entries in designing an attack-resilient classifier. We demonstrate the efficacy of the proposed approach using experiments with the MNIST handwritten digit database.
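The framework's details are in the paper itself; as a rough illustration of the core idea described above (per-feature attack budgets in place of a single uniform one), the following is a minimal NumPy sketch of an FGSM-style perturbation in which each feature's budget `eps[i]` reflects how vulnerable that entry is to falsification. The function name, the logistic-regression loss, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cost_aware_fgsm(x, y, w, b, eps):
    """FGSM-style perturbation with a per-feature budget eps[i].

    Well-secured entries get a small (or zero) budget; easily
    falsified entries get a larger one. The logistic model here
    is only a stand-in for a generic differentiable classifier.
    """
    # Logistic regression: p = sigmoid(w @ x + b), loss = -log p(y)
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of the cross-entropy loss w.r.t. the input, y in {0, 1}
    grad_x = (p - y) * w
    # Each entry moves by at most its own budget, not a shared epsilon
    return x + eps * np.sign(grad_x)

# Example: feature 0 is well secured (budget 0), feature 2 is exposed
x = np.array([1.0, -0.5, 2.0])
w = np.array([0.8, -1.2, 0.5])
eps = np.array([0.0, 0.1, 0.5])
x_adv = cost_aware_fgsm(x, y=1, w=w, b=0.0, eps=eps)
```

A defender would then train against perturbations of this shape, so that robustness effort concentrates on the entries an attacker can actually reach cheaply.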
