Decentralized Systems for Machine Learning
Mar 19, 2025

Rishi Sharma
Abstract
This talk explores innovative approaches to overcoming the central challenges in decentralized AI systems. Traditional decentralized learning faces seemingly inherent tradeoffs between privacy, model utility, communication efficiency, and resilience to system heterogeneity. We present two complementary systems, Shatter and DivShare, that challenge these constraints through novel architectural designs. Shatter introduces virtualization and model chunking to enhance privacy without sacrificing model quality, eliminating the need for performance-degrading noise. DivShare leverages model fragmentation to efficiently handle communication stragglers, enabling heterogeneous networks to train effectively. Through empirical evaluation on standard datasets, we demonstrate that these approaches not only break traditional tradeoffs but also significantly outperform existing solutions across multiple dimensions. This research opens new possibilities for practical, privacy-preserving decentralized AI in real-world settings where data remains distributed and network conditions vary widely.
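To make the model-chunking idea concrete, here is a minimal Python sketch of fragment-wise exchange and per-chunk aggregation under simplifying assumptions. The function names (chunk_params, reassemble), the numpy-based toy setup, and plain averaging are illustrative assumptions for exposition only; they are not the actual Shatter or DivShare implementations.

```python
import numpy as np

def chunk_params(params: np.ndarray, num_chunks: int) -> list[np.ndarray]:
    """Split a flat parameter vector into roughly equal chunks."""
    return np.array_split(params, num_chunks)

def reassemble(chunks: list[np.ndarray]) -> np.ndarray:
    """Concatenate per-chunk results back into a full parameter vector."""
    return np.concatenate(chunks)

# Toy round: three peers each hold a model. Every peer disseminates its
# chunks separately, so no single receiver ever observes another peer's
# full model (the privacy intuition behind chunking).
rng = np.random.default_rng(0)
peer_models = [rng.normal(size=8) for _ in range(3)]
num_chunks = 4

# chunk_inbox[c] collects chunk index c from every peer.
chunk_inbox: list[list[np.ndarray]] = [[] for _ in range(num_chunks)]
for model in peer_models:
    for c, chunk in enumerate(chunk_params(model, num_chunks)):
        chunk_inbox[c].append(chunk)

# Aggregation happens independently per chunk; a straggler's late chunk
# could simply be skipped for this round without stalling the others,
# which is the intuition behind fragment-level straggler handling.
averaged = reassemble(
    [np.mean(np.stack(chs), axis=0) for chs in chunk_inbox]
)
print(averaged)
```

In this sketch, per-chunk aggregation is what decouples the two concerns: privacy comes from never routing a peer's full model through one party, and straggler tolerance comes from treating each chunk as an independently completable unit of communication.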
Event
MIT Media Lab Seminar
Location
MIT Media Lab
75 Amherst Street, Cambridge, MA 02139