Show simple item record

dc.contributor.author: Terhorst, Philipp
dc.contributor.author: Pedersen, Marius
dc.contributor.author: Bylappa Raja, Kiran
dc.date.accessioned: 2024-06-21T11:30:18Z
dc.date.available: 2024-06-21T11:30:18Z
dc.date.created: 2024-06-05T10:01:01Z
dc.date.issued: 2024
dc.identifier.issn: 2637-6415
dc.identifier.uri: https://hdl.handle.net/11250/3135304
dc.description.abstract: In recent years, image and video manipulations with Deepfake have become a severe concern for security and society. Many detection models and datasets have been proposed to detect Deepfake data reliably. However, there is an increased concern that these models and training databases might be biased and thus cause Deepfake detectors to fail. In this work, we investigate factors causing biased detection in public Deepfake datasets by (a) creating large-scale demographic and non-demographic attribute annotations with 47 different attributes for five popular Deepfake datasets and (b) comprehensively analysing the attributes resulting in AI bias of three state-of-the-art Deepfake detection backbone models on these datasets. The analysis shows how a large variety of distinctive attributes (from over 65M labels), including demographic (age, gender, ethnicity) and non-demographic (hair, skin, accessories, etc.) attributes, influences the detection performance. The examined datasets show limited diversity and, more importantly, the utilised Deepfake detection backbone models are strongly affected by the investigated attributes, making them unfair across attributes. Deepfake detection backbone methods trained on such imbalanced/biased datasets produce incorrect detection results, leading to generalisability, fairness, and security issues. Our findings and annotated datasets will guide future research to evaluate and mitigate bias in Deepfake detection techniques. The annotated datasets and the corresponding code are publicly available at: https://github.com/xuyingzhongguo/DeepFakeAnnotations
dc.language.iso: eng
dc.publisher: IEEE
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/deed.no
dc.title: Analyzing Fairness in Deepfake Detection With Massively Annotated Databases
dc.title.alternative: Analyzing Fairness in Deepfake Detection With Massively Annotated Databases
dc.type: Journal article
dc.type: Peer reviewed
dc.description.version: publishedVersion
dc.source.pagenumber: 93-106
dc.source.volume: 5
dc.source.journal: IEEE Transactions on Technology and Society
dc.source.issue: 1
dc.identifier.doi: 10.1109/TTS.2024.3365421
dc.identifier.cristin: 2273575
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1


Associated file(s)
This item appears in the following collection(s)

Attribution-NonCommercial-NoDerivatives 4.0 International
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International