Improving robustness of convolutional neural networks using element-wise activation scaling
Peer reviewed, Journal article
Accepted version
Permanent link: https://hdl.handle.net/11250/3110536
Publication date: 2023
Abstract
Recent work reveals that re-calibrating the intermediate activations of adversarial examples can improve the adversarial robustness of CNN models. State-of-the-art methods exploit this feature at the channel level to help CNN models defend against adversarial attacks, where each intermediate activation is uniformly scaled by a factor. However, we conduct a more fine-grained analysis of intermediate activations and observe that adversarial examples change only a portion of the elements within an activation. This observation motivates us to investigate a new method for re-calibrating the intermediate activations of CNNs to improve robustness. Instead of uniformly scaling each activation, we adjust each element within an activation individually, and thus propose Element-Wise Activation Scaling, dubbed EWAS, to improve CNNs' adversarial robustness. EWAS is a simple yet very effective method for enhancing robustness. Experimental results on ResNet-18 and WideResNet with CIFAR-10 and SVHN show that EWAS significantly improves robust accuracy. In particular, for ResNet-18 on CIFAR-10, EWAS increases the adversarial accuracy by 37.65% to 82.35% against the C&W attack. The code and trained models are available at https://github.com/ieslab-ynu/EWAS.
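The distinction the abstract draws can be illustrated with a minimal NumPy sketch. It contrasts the prior channel-wise approach (one factor broadcast over an entire channel) with element-wise scaling (one factor per activation element). The function names, tensor shapes, and factor values here are illustrative assumptions, not the paper's actual implementation, in which the element-wise factors would be learned during training.

```python
import numpy as np

def channel_wise_scale(activation, channel_factors):
    """Prior channel-level re-calibration: one scalar per channel,
    broadcast over the spatial dimensions.
    activation: (C, H, W), channel_factors: (C,)"""
    return activation * channel_factors[:, None, None]

def element_wise_scale(activation, element_factors):
    """EWAS-style re-calibration (sketch): an individual factor for
    every element of the activation.
    activation and element_factors both have shape (C, H, W)."""
    return activation * element_factors

# Toy activation: 2 channels of 3x3 feature maps (values are arbitrary).
act = np.ones((2, 3, 3))

# Channel-wise: every element of channel 0 is scaled by 0.5, channel 1 by 2.0.
ch_out = channel_wise_scale(act, np.array([0.5, 2.0]))

# Element-wise: most factors match, but a few individual elements differ,
# mirroring the observation that adversarial examples perturb only a
# portion of the elements within an activation.
el_factors = np.full((2, 3, 3), 0.5)
el_factors[0, 0, 0] = 1.5  # hypothetical element needing a different factor
el_out = element_wise_scale(act, el_factors)
```

The channel-wise variant cannot treat two elements of the same channel differently, while the element-wise variant can, which is the extra degree of freedom EWAS exploits.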