Show simple item record

dc.contributor.author: Zhang, Zhi-Yuan
dc.contributor.author: Ren, Hao
dc.contributor.author: He, Zhenli
dc.contributor.author: Zhou, Wei
dc.contributor.author: Liu, Di
dc.date.accessioned: 2024-01-09T09:56:23Z
dc.date.available: 2024-01-09T09:56:23Z
dc.date.created: 2023-08-30T12:56:08Z
dc.date.issued: 2023
dc.identifier.citation: Future Generation Computer Systems. 2023, 149, 136-148. (en_US)
dc.identifier.issn: 0167-739X
dc.identifier.uri: https://hdl.handle.net/11250/3110536
dc.description.abstract: Recent works reveal that re-calibrating the intermediate activations of adversarial examples can improve the adversarial robustness of CNN models. State-of-the-art methods exploit this feature at the channel level to help CNN models defend against adversarial attacks, where each intermediate activation is uniformly scaled by a factor. However, we conduct a more fine-grained analysis of intermediate activations and observe that adversarial examples change only a portion of the elements within an activation. This observation motivates us to investigate a new method for re-calibrating the intermediate activations of CNNs to improve robustness. Instead of uniformly scaling each activation, we individually adjust each element within an activation, and thus propose Element-Wise Activation Scaling, dubbed EWAS, to improve CNNs' adversarial robustness. EWAS is a simple yet highly effective method for enhancing robustness. Experimental results on ResNet-18 and WideResNet with CIFAR-10 and SVHN show that EWAS significantly improves robust accuracy. In particular, for ResNet-18 on CIFAR-10, EWAS increases the adversarial accuracy by 37.65% to 82.35% against the C&W attack. The code and trained models are available at https://github.com/ieslab-ynu/EWAS. (en_US)
dc.language.iso: eng (en_US)
dc.publisher: Elsevier (en_US)
dc.rights: Navngivelse 4.0 Internasjonal (Attribution 4.0 International)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Improving robustness of convolutional neural networks using element-wise activation scaling (en_US)
dc.title.alternative: Improving robustness of convolutional neural networks using element-wise activation scaling (en_US)
dc.type: Peer reviewed (en_US)
dc.type: Journal article (en_US)
dc.description.version: acceptedVersion (en_US)
dc.source.pagenumber: 136-148 (en_US)
dc.source.volume: 149 (en_US)
dc.source.journal: Future Generation Computer Systems (en_US)
dc.identifier.doi: 10.1016/j.future.2023.07.013
dc.identifier.cristin: 2170922
cristin.ispublished: true
cristin.fulltext: postprint
cristin.qualitycode: 2
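The abstract's central contrast — one uniform scaling factor per channel (prior work) versus an individual factor for every element of an activation (the EWAS idea) — can be sketched in a few lines of NumPy. This is only an illustration of the two scaling schemes, not the authors' implementation; the element-wise factors below are random placeholders, whereas in the paper they would be learned (see the linked GitHub repository for the actual code).

```python
import numpy as np

# Toy intermediate activation: 2 channels, each a 3x3 spatial map.
act = np.arange(18, dtype=float).reshape(2, 3, 3)

# Channel-level re-calibration (prior work): one scalar per channel,
# broadcast uniformly over every element of that channel.
channel_scale = np.array([0.5, 1.5]).reshape(2, 1, 1)
channel_out = act * channel_scale

# Element-wise scaling (the EWAS idea): an individual factor for every
# element of the activation. Random placeholders here; learned in practice.
rng = np.random.default_rng(0)
elem_scale = rng.uniform(0.5, 1.5, size=act.shape)
elem_out = act * elem_scale

# Both schemes preserve the activation's shape; only the granularity
# of the scaling differs.
assert channel_out.shape == act.shape
assert elem_out.shape == act.shape
```

The fine-grained scheme matters because, per the abstract, adversarial examples perturb only a portion of the elements within an activation, so a single per-channel factor cannot target just those elements.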


Associated file(s)


This item appears in the following collection(s)


Attribution 4.0 International (Navngivelse 4.0 Internasjonal)
Except where otherwise noted, this item is licensed under Attribution 4.0 International