04 May 2020

Previous work has shown that general DNNs are vulnerable to subtle, carefully crafted perturbations on classification tasks. In recent years, deep neural networks (DNNs) obtained by unfolding sparse coding algorithms have achieved great success on the sparse coding problem. Many applications of sparse coding are of strategic importance, so the security of the learned models is vital; however, it has not received enough attention. Our paper is the first to study adversarial performance on unfolded DNNs for sparse coding. We first verify the effectiveness of existing attack and defense strategies and, surprisingly, find that the defense strategies are useless. In addition, we propose a dedicated attack strategy that eliminates the element on a chosen dimension of the DNN's sparse output code. Furthermore, we propose a succinct black-box attack strategy that generates adversarial perturbations without knowing the DNN's parameters or the data.
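The abstract does not spell out the unfolded architecture or the attack itself, so the following is only a minimal, hypothetical sketch: it assumes a LISTA-style unfolding of ISTA and a signed-gradient perturbation that tries to push one coordinate of the sparse code toward zero, in the spirit of the dimension-elimination attack mentioned above. The layer sizes, step counts, and the `suppress_dimension` objective are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch (not the paper's exact method): a small LISTA-style
# unfolded network for sparse coding plus a gradient-based perturbation that
# suppresses one coordinate of the output code. Shapes and hyperparameters
# are illustrative assumptions.
import torch
import torch.nn as nn

class LISTA(nn.Module):
    """Unfolds K iterations of ISTA: z <- soft(W_e x + S z, theta)."""
    def __init__(self, n, m, K=3):
        super().__init__()
        self.We = nn.Linear(n, m, bias=False)   # encoder applied to the input x
        self.S = nn.Linear(m, m, bias=False)    # recurrent "mutual inhibition" matrix
        self.theta = nn.Parameter(torch.full((m,), 0.1))  # soft-threshold level
        self.K = K

    def forward(self, x):
        z = torch.zeros(x.shape[0], self.S.in_features, device=x.device)
        for _ in range(self.K):
            pre = self.We(x) + self.S(z)
            z = torch.sign(pre) * torch.relu(pre.abs() - self.theta)  # soft threshold
        return z

def suppress_dimension(model, x, dim, eps=0.05, steps=10):
    """Signed-gradient perturbation (bounded by eps) that drives z[dim] toward zero."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        z = model(x + delta)
        loss = z[:, dim].abs().sum()          # magnitude of the targeted coordinate
        loss.backward()
        with torch.no_grad():
            delta -= (eps / steps) * delta.grad.sign()  # descend on the target entry
            delta.clamp_(-eps, eps)                     # keep the perturbation small
        delta.grad.zero_()
    return (x + delta).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = LISTA(n=64, m=128)
    x = torch.randn(4, 64)
    x_adv = suppress_dimension(model, x, dim=7)
    print(model(x)[:, 7], model(x_adv)[:, 7])  # targeted entry shrinks toward zero
```

A black-box variant would replace the gradient step with queries to the model (e.g., finite-difference or random-search updates), since the white-box gradient used here assumes access to the DNN's parameters.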
