AI & Machine Learning
Apr 29, 2026

MIT researchers enhance federated learning for AI on resource-limited devices

AI Summary

Researchers at MIT have developed a new method that improves the efficiency of federated learning by 81%, making it feasible for resource-constrained devices like smartwatches and sensors to deploy AI models securely. This advancement could significantly impact fields requiring strict privacy standards, such as healthcare and finance.

  • MIT researchers have created a method that accelerates federated learning, improving efficiency by 81%.
  • Federated learning allows devices to train AI models using local data while keeping that data secure on the device.
  • The new technique addresses limitations in memory and connectivity among heterogeneous devices, which often struggle with training and data transfer.
  • The Federated Tiny Training Engine (FTTE) framework reduces memory and communication overhead by sending only a subset of model parameters to devices.
  • FTTE employs an asynchronous update process: the server incorporates each device's update as it arrives, rather than waiting for all devices, and weights each update by its recency.
  • In tests, the method reduced on-device memory overhead by 80% and communication payload by 69%, while achieving accuracy close to that of traditional methods.
  • The researchers tested FTTE in simulations and on real devices, demonstrating its scalability and effectiveness in diverse settings.
  • Future research aims to enhance personalized AI model performance on individual devices and conduct larger experiments on actual hardware.
  • The project received partial funding from a Takeda PhD Fellowship.
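The aggregation scheme described above can be sketched in a few lines. This is an illustrative assumption, not the authors' published FTTE code: the class name, the `decay` parameter, and the exponential recency weighting are all hypothetical stand-ins for whatever weighting rule the paper actually uses.

```python
# Hypothetical sketch of FTTE-style aggregation (names and the exact
# weighting rule are assumptions, not the MIT team's implementation):
# the server ships each device only a subset of parameters, and merges
# updates as they arrive, down-weighting stale ones instead of waiting
# for every device to report back.

class AsyncSparseServer:
    def __init__(self, model_size, decay=0.5):
        self.weights = [0.0] * model_size  # global model parameters
        self.version = 0                   # bumped after every merge
        self.decay = decay                 # how quickly stale updates fade

    def sparse_slice(self, indices):
        """Send only a subset of parameters to a device (smaller payload)."""
        return self.version, [self.weights[i] for i in indices]

    def merge(self, indices, deltas, client_version):
        """Fold in one device's update immediately; older updates count less."""
        staleness = self.version - client_version
        alpha = self.decay ** staleness  # recency weighting: fresh -> 1.0
        for i, d in zip(indices, deltas):
            self.weights[i] += alpha * d
        self.version += 1
```

For example, a fresh update (`staleness == 0`) is applied at full weight, while an update computed against a model that has since been merged once is applied at half weight, so slow devices still contribute without blocking faster ones.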
privacy · AI training · health care · finance · efficiency