Revisiting Ensembling in One-Shot Federated Learning

Abstract

Federated Learning (FL) is an appealing approach to training machine learning models without sharing raw data. However, standard FL algorithms are iterative and thus incur a significant communication cost. One-Shot FL (OFL) replaces the iterative exchange of models between clients and the server with a single round of communication, thereby saving substantially on communication costs. Unsurprisingly, OFL exhibits an accuracy gap with respect to FL, especially under high data heterogeneity. We introduce Fens, a novel federated ensembling scheme that approaches the accuracy of FL with the communication efficiency of OFL. Learning in Fens proceeds in two phases: first, clients train models locally and send them to the server, as in OFL; second, clients collaboratively train a lightweight prediction aggregator model using FL. We showcase the effectiveness of Fens through extensive experiments spanning several datasets and heterogeneity levels. In the particular case of the heterogeneously distributed CIFAR-10 dataset, Fens achieves up to 26.9% higher accuracy than state-of-the-art (SOTA) OFL, while being only 3.1% below FL. At the same time, Fens incurs at most 4.3x more communication than OFL, whereas FL is at least 10.9x more communication-intensive than Fens.
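To make the two-phase structure concrete, below is a minimal, self-contained PyTorch sketch of the idea as described in the abstract. It uses synthetic data and hypothetical names (`train_local`, `ensemble_logits`, `fedavg`, the model sizes and hyperparameters are arbitrary); in particular, the choice of a linear aggregator over the concatenated logits of frozen local models is our illustrative reading of "lightweight prediction aggregator", not necessarily the paper's exact design.

```python
# Sketch of the two-phase Fens idea (assumptions noted above, not the paper's code).
# Phase 1: each client trains locally and uploads its model once (OFL-style).
# Phase 2: clients run a few cheap FL (FedAvg) rounds on a tiny aggregator
# that maps the ensemble's stacked logits to a class prediction.
import copy
import torch
import torch.nn as nn

NUM_CLIENTS, NUM_CLASSES, IN_DIM = 4, 10, 32

def make_client_data(n=256):
    # Synthetic stand-in for each client's private dataset.
    return torch.randn(n, IN_DIM), torch.randint(0, NUM_CLASSES, (n,))

clients = [make_client_data() for _ in range(NUM_CLIENTS)]

# ---- Phase 1 (one-shot): local training, single upload to the server ----
def train_local(x, y, epochs=5):
    model = nn.Linear(IN_DIM, NUM_CLASSES)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
    return model

local_models = [train_local(x, y) for x, y in clients]  # communicated once

# ---- Phase 2 (FL): collaboratively train a lightweight aggregator ----
def ensemble_logits(x):
    # Frozen local models; their concatenated logits feed the aggregator.
    with torch.no_grad():
        return torch.cat([m(x) for m in local_models], dim=1)

def make_aggregator():
    return nn.Linear(NUM_CLIENTS * NUM_CLASSES, NUM_CLASSES)

def fedavg(states):
    # Parameter-wise average of the clients' aggregator updates.
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_agg = make_aggregator()
for _ in range(20):  # few FL rounds; the aggregator is tiny, so rounds are cheap
    local_states = []
    for x, y in clients:
        agg = make_aggregator()
        agg.load_state_dict(global_agg.state_dict())
        opt = torch.optim.SGD(agg.parameters(), lr=0.1)
        opt.zero_grad()
        nn.functional.cross_entropy(agg(ensemble_logits(x)), y).backward()
        opt.step()
        local_states.append(agg.state_dict())
    global_agg.load_state_dict(fedavg(local_states))

# Quick sanity check: ensemble + trained aggregator on one client's data.
x, y = clients[0]
acc = (global_agg(ensemble_logits(x)).argmax(dim=1) == y).float().mean()
print(f"train accuracy on client 0: {acc:.2f}")
```

The communication pattern mirrors the abstract's claim: the full models travel only once (phase 1), while the iterative phase exchanges only the small aggregator, which is why Fens sits between OFL and FL in total communication.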

Publication
(To appear) In Advances in Neural Information Processing Systems 38 (NeurIPS 2024)
Rishi Sharma