Lecture 10 Oct 2023

The lack of robustness in neural network classifiers, especially when facing adversarial attacks, is a significant limitation for critical applications. While some researchers have suggested a connection between neuron coverage during training and vulnerability to adversarial perturbations, concrete experimental evidence supporting this claim is lacking. This paper empirically investigates the impact of maximizing neuron coverage during training and assesses the effectiveness of adversarial attacks on under-covered neurons. Additionally, we explore the potential of leveraging coverage information to design more efficient attacks. Our experiments reveal no clear correlation between neuron coverage and either adversarial robustness or attack effectiveness.
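The abstract does not spell out how "neuron coverage" is measured. A common definition (popularized by DeepXplore) counts a neuron as covered if its activation exceeds a threshold on at least one input; the paper may use a different variant. Below is a minimal sketch of that activation-threshold metric, assuming a PyTorch model; the function name `neuron_coverage` and the default threshold are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

def neuron_coverage(model: nn.Module, inputs: torch.Tensor,
                    threshold: float = 0.0) -> float:
    """Fraction of neurons whose activation exceeds `threshold`
    on at least one input (DeepXplore-style metric; the paper's
    exact definition may differ)."""
    activations = {}

    def make_hook(name):
        def hook(_module, _inp, out):
            activations[name] = out.detach()
        return hook

    # Record outputs of the layers we treat as "neurons".
    handles = [
        m.register_forward_hook(make_hook(n))
        for n, m in model.named_modules()
        if isinstance(m, (nn.Linear, nn.Conv2d, nn.ReLU))
    ]
    with torch.no_grad():
        model(inputs)
    for h in handles:
        h.remove()

    covered = total = 0
    for out in activations.values():
        flat = out.flatten(1)  # (batch, neurons)
        covered += (flat > threshold).any(dim=0).sum().item()
        total += flat.shape[1]
    return covered / total

if __name__ == "__main__":
    # Toy model and random data, purely for illustration.
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    x = torch.randn(64, 10)
    print(f"coverage: {neuron_coverage(model, x):.2%}")
```

Forward hooks keep the sketch architecture-agnostic: coverage is computed from recorded layer outputs rather than by modifying the model itself.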
