Our paper "Boosting Asynchronous Decentralized Learning with Model Fragmentation" has been accepted for publication in the ACM Web Conference 2025.
Jan 20, 2025

Rishi Sharma
Our work introduces DivShare, a novel algorithm that addresses the critical challenge of communication stragglers in decentralized learning. By fragmenting models into parameter subsets and distributing them randomly, we've achieved up to 3.9x faster time-to-accuracy compared to state-of-the-art methods like AD-PSGD on the CIFAR-10 dataset. Most significantly, we provide the first formal proof of convergence for a decentralized learning algorithm that accounts for asynchronous communication with delays.
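The fragmentation idea can be illustrated with a minimal sketch: split a model's parameters into random disjoint subsets that could then be sent to different peers. The function and variable names below are illustrative assumptions, not the paper's actual API.

```python
import random

def fragment_params(params, num_fragments, seed=0):
    """Randomly partition parameter names into `num_fragments` disjoint subsets.

    `params` is a dict mapping parameter names to their values; each returned
    fragment is a smaller dict covering a disjoint slice of the model.
    """
    names = list(params)
    rng = random.Random(seed)
    rng.shuffle(names)  # random assignment of parameters to fragments
    buckets = [[] for _ in range(num_fragments)]
    for i, name in enumerate(names):
        buckets[i % num_fragments].append(name)
    return [{n: params[n] for n in bucket} for bucket in buckets]

# Toy "model" as a dict of named parameter tensors (plain lists here).
model = {"w1": [0.1, 0.2], "w2": [0.3], "b1": [0.0], "b2": [0.5]}
fragments = fragment_params(model, num_fragments=2)

# The fragments are disjoint and together cover every parameter.
assert sum(len(f) for f in fragments) == len(model)
```

In an actual decentralized setting, each fragment would be gossiped to a randomly chosen peer, so no single straggling link blocks the full model exchange; this sketch only shows the partitioning step.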
I'll be presenting these findings at the conference in Sydney in April. Link to the paper.