Show simple item record

dc.contributor.author: Hassanpour, Ahmad
dc.contributor.author: Moradikia, Majid
dc.contributor.author: Yang, Bian
dc.contributor.author: Abdelhadi, Ahmed
dc.contributor.author: Busch, Christoph
dc.contributor.author: Fierrez, Julian
dc.date.accessioned: 2023-03-03T08:01:45Z
dc.date.available: 2023-03-03T08:01:45Z
dc.date.created: 2022-05-04T09:01:49Z
dc.date.issued: 2022
dc.identifier.citation: IEEE Access. 2022, 10, 24273-24287.
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://hdl.handle.net/11250/3055595
dc.description.abstract: Enhancing the privacy of machine learning (ML) algorithms has become crucial with the presence of different types of attacks on AI applications. Continual learning (CL) is a branch of ML with the aim of learning a set of knowledge sequentially and continuously from a data stream. On the other hand, differential privacy (DP) has been extensively used to enhance the privacy of deep learning (DL) models. However, adding DP to CL is challenging: on the one hand, DP intrinsically adds noise that reduces utility; on the other hand, the endless learning procedure of CL is a serious obstacle, resulting in catastrophic forgetting (CF) of previous samples of the ongoing stream. To add DP to CL, we propose a methodology by which we can not only strike a tradeoff between privacy and utility, but also mitigate CF. The proposed solution presents a set of key features: (1) it guarantees theoretical privacy bounds by enforcing the DP principle; (2) it incorporates a robust procedure into the proposed DP-CL scheme to hinder CF; and (3) most importantly, it achieves practical continuous training for a CL process without running out of the available privacy budget. Through extensive empirical evaluation on benchmark datasets and analyses, we validate the efficacy of the proposed solution.
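
The tension described in the abstract (DP noise degrading utility while the privacy budget drains over an endless stream of tasks) can be made concrete with a DP-SGD-style update applied to tasks arriving sequentially. The sketch below is a minimal illustration under assumed settings: a logistic-regression model, an assumed clip norm and noise multiplier, and synthetic tasks. It is not the authors' DP-CL algorithm, which this record does not detail.

# Minimal, hypothetical sketch: per-example gradient clipping plus Gaussian
# noise (DP-SGD style), repeated over a sequence of tasks. All hyperparameters
# and the model are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_task(w, X, y, clip=1.0, noise_mult=1.1, lr=0.1, epochs=5):
    """Train logistic-regression weights w on one task with clipped, noised gradients."""
    n = len(y)
    for _ in range(epochs):
        grads = []
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + np.exp(-xi @ w))                    # per-example prediction
            g = (p - yi) * xi                                    # per-example gradient
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))    # clip to L2 norm <= clip
            grads.append(g)
        noise = rng.normal(0.0, noise_mult * clip, size=w.shape)  # Gaussian mechanism
        w = w - lr * (np.sum(grads, axis=0) + noise) / n
    return w

# Stream of tasks: each task is a different synthetic linear problem.
d, w = 5, np.zeros(5)
for task_id in range(3):
    true_w = rng.normal(size=d)
    X = rng.normal(size=(200, d))
    y = (X @ true_w > 0).astype(float)
    w = dp_sgd_task(w, X, y)
    acc = np.mean(((X @ w) > 0) == y)
    print(f"task {task_id}: accuracy {acc:.2f}")

Each task consumes part of the privacy budget, and training on a new task overwrites what was learned on earlier ones; the paper's contribution is to control both effects at once, which this toy loop does not attempt.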
dc.language.iso: eng
dc.publisher: IEEE
dc.rights: Attribution 4.0 International (Navngivelse 4.0 Internasjonal)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Differential Privacy Preservation in Robust Continual Learning
dc.title.alternative: Differential Privacy Preservation in Robust Continual Learning
dc.type: Peer reviewed
dc.type: Journal article
dc.description.version: publishedVersion
dc.source.pagenumber: 24273-24287
dc.source.volume: 10
dc.source.journal: IEEE Access
dc.identifier.doi: 10.1109/ACCESS.2022.3154826
dc.identifier.cristin: 2021242
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1


Associated file(s)


This item appears in the following collection(s)

Attribution 4.0 International (Navngivelse 4.0 Internasjonal)
Except where otherwise noted, this item is licensed under Attribution 4.0 International