
dc.contributor.author: Rezgui, Zohra
dc.contributor.author: Bassit, Amina
dc.contributor.author: Veldhuis, Raymond Nicolaas Johan
dc.date.accessioned: 2023-03-06T14:55:27Z
dc.date.available: 2023-03-06T14:55:27Z
dc.date.created: 2022-09-26T09:03:46Z
dc.date.issued: 2022
dc.identifier.citation: IET Biometrics. 2022.
dc.identifier.issn: 2047-4938
dc.identifier.uri: https://hdl.handle.net/11250/3056174
dc.description.abstract: Most deep learning-based image classification models are vulnerable to adversarial attacks that introduce imperceptible changes to the input images for the purpose of model misclassification. It has been demonstrated that these attacks, targeting a specific model, are transferable among models performing the same task. However, models performing different tasks but sharing the same input space and model architecture have not previously been considered in the transferability scenarios presented in the literature. In this paper, this phenomenon is analysed in the context of VGG16-based and ResNet50-based biometric classifiers. The authors investigate the impact of two white-box attacks on a gender classifier and contrast a defence method as a countermeasure. Then, using adversarial images generated by the attacks, a pre-trained face recognition classifier is attacked in a black-box fashion. Two verification comparison settings are employed, in which images perturbed with the same and with different magnitudes of perturbation are compared. The authors' results indicate transferability in the fixed-perturbation setting for a Fast Gradient Sign Method attack and non-transferability in a pixel-guided denoiser attack setting. The interpretation of this non-transferability can support the use of fast and training-free adversarial attacks targeting soft biometric classifiers as a means of achieving soft biometric privacy protection while maintaining facial identity as utility.
dc.language.iso: eng
dc.publisher: Wiley Open Access
dc.rights: Navngivelse 4.0 Internasjonal (Attribution 4.0 International)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Transferability analysis of adversarial attacks on gender classification to face recognition: Fixed and variable attack perturbation
dc.title.alternative: Transferability analysis of adversarial attacks on gender classification to face recognition: Fixed and variable attack perturbation
dc.type: Peer reviewed
dc.type: Journal article
dc.description.version: publishedVersion
dc.source.pagenumber: 13
dc.source.journal: IET Biometrics
dc.identifier.doi: 10.1049/bme2.12082
dc.identifier.cristin: 2055219
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1
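The abstract above refers to the Fast Gradient Sign Method (FGSM), the white-box attack for which transferability to face recognition was observed in the fixed-perturbation setting. Below is a minimal, illustrative PyTorch sketch of an FGSM step against a gender classifier followed by a black-box probe of a face recognition model; the model names, loss, and epsilon value are assumptions for illustration only and do not reproduce the authors' exact experimental setup.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon):
        # One signed-gradient step on the input: x_adv = x + epsilon * sign(grad_x loss).
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        adv = images + epsilon * images.grad.sign()
        # Keep pixel values in a valid range (assuming inputs scaled to [0, 1]).
        return adv.clamp(0.0, 1.0).detach()

    # Hypothetical usage: perturb inputs of a gender classifier, then feed the same
    # adversarial images to a separate, pre-trained face recognition model to test
    # black-box transferability (fixed perturbation: compared images share one epsilon).
    # adv_batch = fgsm_attack(gender_model, face_batch, gender_labels, epsilon=0.03)
    # clean_emb, adv_emb = face_model(face_batch), face_model(adv_batch)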



Except where otherwise noted, this item is licensed under Navngivelse 4.0 Internasjonal (Attribution 4.0 International).