Decentralized Graph Federated Multitask Learning for Streaming Data
Chapter
Accepted version
Permanent link: https://hdl.handle.net/11250/3058758
Publication date: 2022
Original version: 10.1109/CISS53076.2022.9751160

Abstract
In federated learning (FL), multiple clients connected to a single server train a global model on locally stored data without revealing their data to the server or to other clients. However, this single-server architecture is highly vulnerable to communication failures and to computational bottlenecks at the server. In response, recent work proposed a multi-server federated architecture, namely the graph federated learning (GFL) architecture. Existing work, however, assumes a fixed amount of data at each client and the training of a single global model. This paper proposes a decentralized online multitask learning algorithm based on GFL (O-GFML). Clients update their local models using continuously streaming data, while clients and multiple servers can train different but related models simultaneously. Furthermore, to enhance the communication efficiency of O-GFML, we develop a partial-sharing-based O-GFML (PSO-GFML). PSO-GFML allows participating clients to exchange only a portion of their model parameters with their respective servers during a global iteration, while non-participating clients update their local models whenever they have access to new data. In the context of kernel regression, we show the mean convergence of PSO-GFML. Experimental results show that PSO-GFML achieves competitive performance with considerably lower communication overhead than O-GFML.
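To make the partial-sharing idea concrete, below is a minimal Python sketch, not the paper's exact PSO-GFML: it collapses the multi-server graph to a single server group, parameterizes kernel regression with random Fourier features so each model is a finite vector, uses an LMS-style online update on each streamed sample, and fuses a random 20% of the parameter coordinates per global iteration. The feature map `z`, the step size, and the coordinate-selection and averaging rules are all illustrative assumptions, not the authors' scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: random Fourier features approximate a Gaussian kernel,
# so each client's kernel-regression model is a length-D parameter vector.
D, d = 50, 5                      # feature dimension, input dimension
Omega = rng.normal(size=(D, d))   # random projections (shared by all nodes)
b = rng.uniform(0.0, 2.0 * np.pi, D)

def z(x):
    """Random-feature map: z(x) approximates a kernel feature of input x."""
    return np.sqrt(2.0 / D) * np.cos(Omega @ x + b)

class Client:
    def __init__(self, step=0.1):
        self.w = np.zeros(D)      # local model parameters
        self.step = step

    def local_update(self, x, y):
        """One online (LMS-style) step on a freshly streamed sample (x, y)."""
        feat = z(x)
        err = y - self.w @ feat
        self.w += self.step * err * feat

def partial_share(clients, frac=0.2):
    """Server-side fusion of a random subset of coordinates (partial sharing).

    Only `frac` of the D entries are exchanged and averaged; the rest stay
    purely local. Both the subset choice and the plain averaging rule are
    hypothetical stand-ins for the paper's fusion scheme.
    """
    idx = rng.choice(D, size=int(frac * D), replace=False)
    avg = np.mean([c.w[idx] for c in clients], axis=0)
    for c in clients:
        c.w[idx] = avg

# Toy run: four clients stream data from related but distinct tasks.
clients = [Client() for _ in range(4)]
w_true = rng.normal(size=d)
for t in range(200):
    for k, c in enumerate(clients):
        x = rng.normal(size=d)
        y = (w_true + 0.1 * k) @ x   # per-client task shift => multitask setting
        c.local_update(x, y)
    partial_share(clients)           # exchange only ~20% of the parameters
```

Sharing only a fraction of the coordinates cuts the per-iteration uplink and downlink cost proportionally, which is the communication saving the abstract refers to; a non-participating client would simply skip `partial_share` while still running `local_update` on newly arrived samples.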