Privacy-Preserving Distributed Learning with Nonsmooth Objective Functions
This paper develops a fully distributed differentially private learning algorithm based on the alternating direction method of multipliers (ADMM) to solve nonsmooth optimization problems. We employ an approximation of the augmented Lagrangian to handle nonsmooth objective functions. Furthermore, we perturb the primal update at each agent with time-varying Gaussian noise of decreasing variance to provide zero-concentrated differential privacy. The developed algorithm has a competitive privacy-accuracy trade-off and applies to problems that are nonsmooth and not necessarily strongly convex. Convergence and privacy-preserving properties are confirmed via both theoretical analysis and simulations.
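The core privacy mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function name, step-size rule, and noise schedule are assumptions chosen to show the pattern of a primal update perturbed by Gaussian noise whose variance decreases over iterations.

```python
import numpy as np

def private_primal_update(x, subgrad, dual, rho, t,
                          sigma0=1.0, decay=0.9,
                          rng=np.random.default_rng(0)):
    """One illustrative perturbed primal step (hypothetical sketch).

    x       : agent's current local primal variable
    subgrad : subgradient of the local (possibly nonsmooth) objective at x
    dual    : agent's local dual variable
    rho     : ADMM penalty parameter
    t       : iteration index; the noise variance decays with t
    """
    step = 1.0 / (rho * (t + 1))             # diminishing step size (assumed)
    x_new = x - step * (subgrad + dual)      # approximate augmented-Lagrangian step
    sigma_t = sigma0 * decay**t              # time-varying, decreasing std. dev.
    noise = rng.normal(0.0, sigma_t, size=x.shape)
    return x_new + noise                     # perturbed update for privacy

# Usage: run a few perturbed iterations on a toy quadratic local objective.
x = np.zeros(3)
dual = np.zeros(3)
for t in range(5):
    subgrad = 2.0 * x - np.ones(3)           # gradient of a simple quadratic loss
    x = private_primal_update(x, subgrad, dual, rho=1.0, t=t)
```

Because the injected noise shrinks over time, later iterates are perturbed less, which is what enables the accuracy of the final solution while earlier, noisier updates protect the agents' data.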