EXPLORING THE CONNECTION BETWEEN NEURON COVERAGE AND ADVERSARIAL ROBUSTNESS IN DNN CLASSIFIERS
William Piat, Jalal Fadili, Frédéric Jurie
The lack of robustness of neural network classifiers, especially against adversarial attacks, is a significant limitation for critical applications. While some researchers have suggested a connection between neuron coverage during training and vulnerability to adversarial perturbations, concrete experimental evidence supporting this claim is lacking. This paper empirically investigates the impact of maximizing neuron coverage during training and assesses the effectiveness of adversarial attacks targeting under-covered neurons. Additionally, we explore the potential of leveraging coverage to design more efficient attacks. Our experiments reveal no clear correlation between neuron coverage and either adversarial robustness or attack effectiveness.
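For concreteness, the sketch below illustrates one common way neuron coverage is measured: the fraction of hidden units whose activation exceeds a threshold on at least one input of a batch (the DeepXplore-style definition). This threshold-based formulation and the helper name `neuron_coverage` are assumptions for illustration only; the paper's own coverage metric and training protocol may differ.

```python
# Illustrative sketch (not the paper's exact protocol): threshold-based
# neuron coverage over the ReLU units of a PyTorch classifier.
import torch
import torch.nn as nn


def neuron_coverage(model: nn.Module, inputs: torch.Tensor, t: float = 0.0) -> float:
    """Fraction of ReLU units activated above threshold t on `inputs` (assumed definition)."""
    activations = []

    def hook(_module, _inp, out):
        # Flatten all non-batch dimensions so each column corresponds to one neuron.
        activations.append(out.detach().flatten(1))

    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(inputs)
    for h in handles:
        h.remove()

    covered = total = 0
    for act in activations:
        # A neuron counts as covered if any input in the batch drives it above t.
        fired = (act > t).any(dim=0)
        covered += int(fired.sum())
        total += fired.numel()
    return covered / max(total, 1)
```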