Explainable Visualization for Morphing Attack Detection
Peer reviewed journal article, accepted version
Date: 2022
Abstract
Detecting morphed face images has become critical for maintaining trust in automated facial biometric verification systems. It is well demonstrated that the better the biometric performance of a Face Recognition System (FRS), the higher its vulnerability to face morphing attacks. Morphing can be understood as a technique for combining two or more look-alike facial images, corresponding to an attacker and an accomplice, so that the attacker could apply for a valid passport by exploiting the accomplice's identity. Morphing Attack Detection (MAD) based on Convolutional Neural Networks (CNNs) has demonstrated good performance in detecting morphed images. However, such networks lack transparency, and it is unclear how they differentiate between bona fide and morphed facial images. This lack of explainability needs careful consideration in safety- and security-related applications. This paper explores Layer-wise Relevance Propagation (LRP) to determine the most relevant features. We fine-tune a pre-trained VGG network for face morphing attack detection, and LRP is then used to investigate the decision-making process and identify which input pixels contribute to attack detection. The results show that the CNN considers only a small part of the image, usually the regions around the eyes, nose, and mouth.
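To make the pipeline described above more concrete, the following is a minimal sketch in PyTorch of fine-tuning a VGG16 for two-class morph detection and propagating relevance back to the input pixels with an LRP-epsilon rule. It is an illustration under stated assumptions, not the authors' exact setup: the untrained two-class head, the epsilon value, the random placeholder input, and the layer unrolling are all assumptions made for the example.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical stand-in for the fine-tuned detector: an ImageNet
# pre-trained VGG16 whose last classifier layer is replaced with a
# two-class (bona fide vs. morph) head. In practice this head would be
# trained on morphing-attack data; here it is left untrained.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)
model.eval()

# Disable in-place ReLUs so intermediate activations can be stored.
for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.inplace = False

# Unroll the network into a flat list of layers so relevance can be
# redistributed backwards layer by layer.
layers = list(model.features) + [model.avgpool, nn.Flatten()] + list(model.classifier)

def lrp_epsilon(layers, x, target, eps=1e-6):
    """Propagate the logit of `target` back to the input pixels (LRP-epsilon)."""
    # Forward pass, storing the activation entering every layer.
    with torch.no_grad():
        activations = [x]
        for layer in layers:
            activations.append(layer(activations[-1]))

    # Initialise relevance with the logit of the class being explained.
    relevance = torch.zeros_like(activations[-1])
    relevance[0, target] = activations[-1][0, target]

    # Backward pass: epsilon rule implemented via the standard gradient trick.
    for layer, a in zip(reversed(layers), reversed(activations[:-1])):
        a = a.detach().requires_grad_(True)
        z = layer(a) + eps
        s = (relevance / z).detach()
        (grad,) = torch.autograd.grad((z * s).sum(), a)
        relevance = (a * grad).detach()
    return relevance

x = torch.rand(1, 3, 224, 224)          # placeholder for a normalised face crop
target = model(x).argmax(dim=1).item()  # explain the predicted class
pixel_relevance = lrp_epsilon(layers, x, target)
heatmap = pixel_relevance.sum(dim=1)    # per-pixel relevance, summed over channels
```

The epsilon term stabilises the division when layer outputs are close to zero. In a MAD setting, the resulting per-pixel heat map would be overlaid on the face image to visualise which regions, typically around the eyes, nose, and mouth, drive the bona fide versus morph decision.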