Show simple item record

dc.contributor.author: Pflug, Anika
dc.date.accessioned: 2015-06-05T06:08:35Z
dc.date.available: 2015-06-05T06:08:35Z
dc.date.issued: 2015-06-05
dc.identifier.isbn: 978-82-8340-007-6
dc.identifier.issn: 1893-1227
dc.identifier.uri: http://hdl.handle.net/11250/284604
dc.description.abstract: The outer ear is an emerging biometric trait that has drawn the attention of the research community for more than a decade. The unique structure of the auricle has long been known among forensic scientists and has been used for the identification of suspects in many cases. The next logical step towards a broader application of ear biometrics is to create automatic ear recognition systems. This work focuses on the use of texture (2D) and depth (3D) data for improving the performance of ear recognition. We compare ear recognition systems using either texture or depth data with respect to segmentation and recognition accuracy, but also in the context of robustness to pose variations, signal degradation and throughput.

We propose a novel segmentation method for ears in which texture and surface information are fused in the feature space. We also provide a reliable method for geometric normalization of ear images and present a comparative study of different texture description methods and the impact of their parametrization and of the capture settings of a dataset. In this context, we propose a fusion scheme where fixed-length spectral histograms are created from texture and surface information. The proposed ear recognition system is integrated into a demonstrator system as part of a novel identification system for forensics. The system is benchmarked against a challenging dataset that comprises 3D head models, mugshots and CCTV videos from four different perspectives. As a result of this work, we outline limitations of current ear recognition systems and provide possible directions for future applied research.

Having a complete ear recognition system with optimized parameters, we measure the impact of image quality on accuracy during ear segmentation and ear recognition. These experiments focus on noise, blur and compression artefacts and are hence only conducted on 2D data. We show that blur has a smaller impact on system performance than noise. In scenarios where we work with compressed images, we show that the performance can be improved by tuning the size of the local image patches used for feature extraction with respect to the size of the compression artefacts.

The thesis concludes with work on automatic classification of ears for the purpose of narrowing the search space in large datasets. We show that classification of ears using texture descriptors is possible. Furthermore, we show that the class label is influenced by the skin tone, but also by the capture settings of the dataset. In further work, we propose a method for the extraction of binary feature vectors from texture descriptors and their application in a 2-stage search system. We show that our 2-stage system improves the recognition performance because it removes images from the search space that would otherwise have caused recognition errors in the second stage.
dc.language.iso: eng
dc.relation.ispartofseries: Doktorgradsavhandlinger ved Høgskolen i Gjøvik;2/2015
dc.subject: biometric
dc.subject: ear recognition
dc.title: Ear Recognition: Biometric Identification using 2- and 3-Dimensional Images of Human Ears
dc.type: Doctoral thesis
dc.subject.nsi: VDP::Mathematics and natural science: 400::Information and communication science: 420::Security and vulnerability: 424
dc.source.pagenumber: 205
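
The abstract describes a fusion scheme in which fixed-length spectral histograms computed from texture and surface (depth) information are combined in the feature space. The sketch below illustrates that general idea only; it is not the thesis implementation. It assumes geometrically normalised 2D texture and depth images of equal size, and the uniform-LBP descriptor, patch size and per-patch normalisation are illustrative placeholders.

```python
# Minimal sketch (not the thesis implementation): feature-level fusion of
# texture (2D) and depth (3D) cues via fixed-length spectral histograms.
# Patch size and LBP parameters are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern

def patch_histograms(image, patch=16, points=8, radius=1):
    """Compute uniform-LBP histograms on non-overlapping patches and
    concatenate them into one fixed-length descriptor."""
    codes = local_binary_pattern(image, points, radius, method="uniform")
    n_bins = points + 2                       # number of uniform LBP codes
    h, w = codes.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = codes[y:y + patch, x:x + patch]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins))
            feats.append(hist / (hist.sum() + 1e-9))   # normalise per patch
    return np.concatenate(feats)

def fused_descriptor(texture, depth):
    """Fuse texture and surface (depth) information in the feature space
    by concatenating their spectral histograms."""
    return np.concatenate([patch_histograms(texture), patch_histograms(depth)])
```

Concatenation is only one way to fuse in the feature space; the choice of descriptor and patch layout determines the final vector length and, per the abstract, interacts with image quality and compression.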
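The abstract also mentions binary feature vectors derived from texture descriptors and a 2-stage search that removes likely error cases from the search space before a finer comparison. The sketch below is one plausible reading under stated assumptions: median-threshold binarisation, Hamming-distance pruning and a chi-square second stage are illustrative choices, not the method used in the thesis.

```python
# Minimal 2-stage search sketch, assuming descriptors such as those from
# the fusion sketch above. Binarisation rule, shortlist size and the
# second-stage distance are illustrative assumptions.
import numpy as np

def binarise(desc):
    """Stage-1 binary feature vector: 1 where a bin exceeds the vector median."""
    return (desc > np.median(desc)).astype(np.uint8)

def chi_square(a, b):
    """Stage-2 distance between real-valued histogram descriptors."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + 1e-9))

def two_stage_search(probe, gallery, shortlist=50):
    """Stage 1 prunes the gallery by Hamming distance on binary vectors;
    stage 2 ranks the survivors with the finer chi-square distance."""
    probe_bits = binarise(probe)
    hamming = [np.count_nonzero(probe_bits != binarise(g)) for g in gallery]
    survivors = np.argsort(hamming)[:shortlist]
    scores = [(i, chi_square(probe, gallery[i])) for i in survivors]
    return sorted(scores, key=lambda s: s[1])    # best match first
```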

