On the resilience of online federated learning to model poisoning attacks through partial sharing
Chapter, Conference object
Accepted version

Date
2024
Collections
- Institutt for elkraftteknikk [2614]
- Publikasjoner fra CRIStin - NTNU [41326]
Original version
ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). DOI: 10.1109/ICASSP48485.2024.10447497

Abstract
We investigate the robustness of the recently introduced partial-sharing online federated learning (PSO-Fed) algorithm against model-poisoning attacks. To this end, we analyze the performance of the PSO-Fed algorithm in the presence of Byzantine clients, who may clandestinely corrupt their local models with additive noise before sharing them with the server. PSO-Fed can operate on streaming data and reduces the communication load by allowing each client to exchange only parts of its model with the server. Our analysis, considering a linear regression task, reveals that the convergence of PSO-Fed can be ensured in the mean sense, even when confronted with model-poisoning attacks. Our extensive numerical results support this claim and demonstrate that PSO-Fed mitigates Byzantine attacks more effectively than its state-of-the-art competitors. Our simulation results also reveal that, when model-poisoning attacks are present, there exists a non-trivial optimal step size for PSO-Fed that minimizes its steady-state mean-square error.
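The setting described in the abstract can be illustrated with a minimal simulation sketch. The code below is not the paper's exact algorithm or analysis; it is a simplified toy version, with all parameter values (model dimension, number of clients, shared entries per round, attack noise level, step size) assumed for illustration. Each client refreshes a random subset of model entries from the server, performs one LMS (stochastic-gradient) step on a streaming linear-regression sample, and shares the same subset back; Byzantine clients add Gaussian noise to the entries they share.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter choices (assumed, not from the paper)
D = 10           # model dimension
K = 20           # number of clients
M = 2            # entries each client exchanges per round (partial sharing)
BYZ = {0, 1}     # indices of Byzantine clients
SIGMA_ATK = 1.0  # std of the additive model-poisoning noise
MU = 0.05        # local LMS step size
ROUNDS = 400

w_true = rng.standard_normal(D)   # ground-truth linear model
w_server = np.zeros(D)            # global model held by the server
w_local = np.zeros((K, D))        # local model copies

def mse(w):
    """Mean-square deviation from the ground-truth model."""
    return float(np.mean((w - w_true) ** 2))

initial_mse = mse(w_server)
for t in range(ROUNDS):
    updates = np.zeros(D)
    counts = np.zeros(D)
    for k in range(K):
        # Each client exchanges a random subset of M coordinates this round.
        idx = rng.choice(D, size=M, replace=False)
        # Download: refresh only the shared entries from the server.
        w_local[k, idx] = w_server[idx]
        # Streaming observation: y = x^T w_true + measurement noise.
        x = rng.standard_normal(D)
        y = x @ w_true + 0.1 * rng.standard_normal()
        # Local LMS (stochastic gradient) step on the streaming sample.
        e = y - x @ w_local[k]
        w_local[k] += MU * e * x
        shared = w_local[k, idx].copy()
        if k in BYZ:
            # Model poisoning: additive noise on the shared entries.
            shared += SIGMA_ATK * rng.standard_normal(M)
        updates[idx] += shared
        counts[idx] += 1
    # Server averages the received entries coordinate-wise.
    seen = counts > 0
    w_server[seen] = updates[seen] / counts[seen]

final_mse = mse(w_server)
print(f"MSE before: {initial_mse:.3f}, after: {final_mse:.3f}")
```

Despite the poisoned updates, the coordinate-wise averaging over mostly honest clients keeps the error bounded, which mirrors the abstract's claim of convergence in the mean; sweeping `MU` in such a sketch also hints at the non-trivial optimal step size the authors report under attack.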