Show simple item record

dc.contributor.advisor: Busch, Christoph
dc.contributor.advisor: Ramachandra, Raghavendra
dc.contributor.author: Khodabakhsh, Ali
dc.date.accessioned: 2021-06-04T12:05:22Z
dc.date.available: 2021-06-04T12:05:22Z
dc.date.issued: 2021
dc.identifier.isbn: 978-82-326-6101-5
dc.identifier.issn: 2703-8084
dc.identifier.uri: https://hdl.handle.net/11250/2757964
dc.description.abstract: Following the introduction of image manipulation tools such as Adobe Photoshop in the early 2000s, public trust in image authenticity dropped, and the need to develop and deploy image authentication techniques became apparent. Recently, we face a similar situation for video content, as photo-realistic video manipulation tools such as Deepfake are becoming available and within reach of the general public as well as bad actors. In human-to-human communication, the face and voice modalities play a crucial role, and, not surprisingly, these same modalities are the ones most under attack by forgers. Historically, audiovisual content authentication has been the focus of the field of multimedia forensics, with more than 15 years of accumulated literature. As biometric systems grew popular in practice, they faced similar challenges and a comparable need for content authentication; consequently, the field of presentation attack detection was born to protect biometric systems against fake biometric presentations. Because the presentation attack detection problem, defined as protecting a biometric system from presentation attacks, parallels the audiovisual content authentication problem, defined as protecting the viewer from fake content, biometric presentation attack detection can provide a solid basis for approaching multimedia authentication. The primary objective of this thesis is to address the audiovisual content authentication problem for the face modality through vulnerability assessment and mitigation of the detected vulnerabilities, relying on knowledge from biometrics and presentation attack detection. To this end, after producing a taxonomy of existing generation techniques, subjective tests were conducted to assess the vulnerability of viewers to the most prevalent generation techniques, relying on data collected from the wild. Through this process, the generation techniques to which viewers are most susceptible were identified. The discovered vulnerabilities were then mitigated individually by introducing effective detection techniques that outperform existing solutions. Furthermore, the vulnerability of existing general-purpose detection methods was analyzed, and it was discovered that these methods show limited generalization capacity when faced with new generation methods. To mitigate this vulnerability, a generalizable detection method based on an anomaly extraction approach was introduced and empirically evaluated against state-of-the-art methods. Additionally, all datasets collected during the course of this thesis work have been made publicly available to stimulate further research on this topic.
dc.language.iso: eng
dc.publisher: NTNU
dc.relation.ispartofseries: Doctoral theses at NTNU;2021:213
dc.relation.haspart: Paper 1: A. Khodabakhsh, C. Busch and R. Ramachandra, “A Taxonomy of Audiovisual Fake Multimedia Content Creation Technology,” 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), Miami, FL, 2018, pp. 372-377. © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.relation.haspart: Paper 2: A. Khodabakhsh, R. Ramachandra and C. Busch, “Subjective Evaluation of Media Consumer Vulnerability to Fake Audiovisual Content,” 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, 2019, pp. 1-6. © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.relation.haspart: Paper 3: A. Khodabakhsh and H. Loiselle, “Action-Independent Generalized Behavioral Identity Descriptors for Look-alike Recognition in Videos,” 2020 International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 2020, pp. 151-162.
dc.relation.haspart: Paper 4: T. Nielsen, A. Khodabakhsh and C. Busch, “Unit-Selection Based Facial Video Manipulation Detection,” 2020 International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 2020, pp. 87-96.
dc.relation.haspart: Paper 5: A. Khodabakhsh, R. Ramachandra, K. Raja, P. Wasnik and C. Busch, “Fake Face Detection Methods: Can They Be Generalized?,” 2018 International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 2018, pp. 1-11. © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
dc.relation.haspart: Paper 6: A. Khodabakhsh and C. Busch, “A Generalizable Deepfake Detector based on Neural Conditional Distribution Modelling,” 2020 International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, Germany, 2020, pp. 191-198.
dc.relation.haspart: Paper 7: A. Khodabakhsh and Z. Akhtar, “Unknown Presentation Attack Detection against Rational Attackers,” arXiv preprint arXiv:2010.01592, 2020. (Submitted to IET Biometrics)
dc.title: Automated Authentication of Audiovisual Contents: A Biometric Approach
dc.type: Doctoral thesis
dc.subject.nsi: VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550

