

Poster in Workshop: The 4th Workshop on Practical ML for Developing Countries: Learning Under Limited/Low Resource Settings

Towards Federated Learning Under Resource Constraints via Layer-wise Training and Depth Dropout

Pengfei Guo · Warren Morningstar · Raviteja Vemulapalli · Karan Singhal · Vishal Patel · Philip Mansfield


Abstract: Large machine learning models trained on diverse data have been successful across many applications. Federated learning enables training on private data that may otherwise be inaccessible, such as domain-specific datasets decentralized across many clients. However, federated learning can be difficult to scale to large models when clients have limited resources, which often forces a trade-off between model size and data accessibility. To mitigate this issue and facilitate training of large machine learning models on edge devices, we introduce a simple yet effective strategy, Federated Layer-wise Learning, that simultaneously reduces per-client memory, computation, and communication costs. We train the model in a layer-wise fashion, allowing each client to train just a single layer, which considerably reduces the computational burden with minimal performance degradation. In addition, we introduce Federated Depth Dropout, a technique that randomly drops frozen layers during training, to further reduce resource usage. Coupling these two designs enables us to effectively train large models on edge devices. Specifically, we reduce training memory usage by 5$\times$ or more, and we demonstrate that performance on downstream tasks is comparable to that of conventional federated self-supervised representation learning.
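To make the two ideas in the abstract concrete, below is a minimal, centralized sketch in PyTorch. It is an illustrative assumption, not the authors' implementation: all class, function, and variable names (LayerwiseMLP, set_active_block, drop_prob, and so on) are invented for this example. It shows layer-wise learning (only one block has trainable parameters at a time) and a simple form of depth dropout (frozen blocks are randomly skipped during the forward pass); the federated aspect, where each stage corresponds to rounds of client training and server aggregation, is omitted.

import random
import torch
import torch.nn as nn


class LayerwiseMLP(nn.Module):
    def __init__(self, dims, drop_prob=0.5):
        super().__init__()
        # One "block" per layer; equal hidden widths so skipped blocks can be
        # bypassed without shape mismatches.
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU())
             for d_in, d_out in zip(dims[:-1], dims[1:])]
        )
        self.drop_prob = drop_prob  # probability of dropping a frozen block
        self.active_idx = 0         # index of the single trainable block

    def set_active_block(self, idx):
        """Freeze every block except block `idx`, the only one being trained."""
        self.active_idx = idx
        for i, block in enumerate(self.blocks):
            for p in block.parameters():
                p.requires_grad = (i == idx)

    def forward(self, x):
        for i, block in enumerate(self.blocks):
            # Depth dropout: frozen blocks may be skipped at random during
            # training, saving the compute and activation memory of that block.
            if i != self.active_idx and self.training and random.random() < self.drop_prob:
                continue
            x = block(x)
        return x


if __name__ == "__main__":
    torch.manual_seed(0)
    model = LayerwiseMLP(dims=[32, 32, 32, 32])
    x, y = torch.randn(64, 32), torch.randn(64, 32)

    # Layer-wise schedule: each stage trains exactly one block; the rest stay
    # frozen and are eligible for depth dropout.
    for stage in range(len(model.blocks)):
        model.set_active_block(stage)
        opt = torch.optim.SGD(
            [p for p in model.parameters() if p.requires_grad], lr=0.1
        )
        for _ in range(10):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()
        print(f"stage {stage}: loss = {loss.item():.4f}")

Because only the active block requires gradients and dropped frozen blocks are never executed, per-step memory and compute scale with roughly one layer rather than the full depth, which is the resource saving the abstract describes.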
