Privacy-Preserving Distributed Learning with Nonsmooth Objective Functions
Chapter
Accepted version

Permanent link: https://hdl.handle.net/11250/2984077
Publication date: 2021
Original version: 10.1109/IEEECONF51394.2020.9443287

Abstract
This paper develops a fully distributed differentially private learning algorithm based on the alternating direction method of multipliers (ADMM) to solve nonsmooth optimization problems. We employ an approximation of the augmented Lagrangian to handle nonsmooth objective functions. Furthermore, we perturb the primal update at each agent with time-varying Gaussian noise of decreasing variance to provide zero-concentrated differential privacy. The developed algorithm has a competitive privacy-accuracy trade-off and applies to nonsmooth, not necessarily strongly convex problems. Convergence and privacy-preserving properties are confirmed via both theoretical analysis and simulations.
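To make the noise-injection mechanism concrete, the following is a minimal Python sketch of a perturbed primal step at a single agent: the local (noise-free) primal update is computed and then corrupted with Gaussian noise whose standard deviation shrinks over iterations. The placeholder update function, the geometric schedule `sigma0 * decay**t`, and all parameter values are illustrative assumptions, not the exact expressions derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_primal_update(primal_update, x, t, sigma0=1.0, decay=0.95):
    """Apply a local primal update, then add Gaussian noise whose
    standard deviation decays geometrically with the iteration index t.

    primal_update : callable computing the agent's noise-free primal step
    x             : current local primal variable (1-D numpy array)
    t             : iteration counter
    sigma0, decay : assumed noise schedule sigma_t = sigma0 * decay**t
    """
    x_new = primal_update(x)            # deterministic ADMM-style primal step
    sigma_t = sigma0 * decay**t         # variance decreases over iterations
    return x_new + rng.normal(0.0, sigma_t, x.shape)

# Toy usage: a proximal-style shrinkage toward zero stands in for the
# agent's actual update, purely to show where the noise is injected.
x = np.ones(5)
for t in range(3):
    x = perturbed_primal_update(lambda v: 0.5 * v, x, t)
```

The decreasing variance is what allows the perturbation to protect each agent's local data while still letting the iterates converge; the precise schedule and the resulting zero-concentrated differential privacy guarantee are established in the paper's analysis.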