

Poster in Workshop: Sparsity in LLMs (SLLM): Deep Dive into Mixture of Experts, Quantization, Hardware, and Inference

Scalable Continual Learning: Adaptive MoEs for Expanding Task Sets

Adrian Candocia · Omer Inan · Raaghav Agarwal · Aamod Varma · Mark Davenport


Abstract:

Recently, the Mixture-of-Experts (MoE) model has been shown to be an effective strategy for continual learning because it can adapt to a range of tasks by employing an array of "experts" that each specialize in certain tasks. However, the MoE model lacks the ability to adapt to completely new tasks, particularly as the number of tasks grows large. In this work, we develop a framework for expanding the number of experts as needed when new tasks arise. We also provide simulations demonstrating that our approach can effectively handle a growing number of tasks.
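To make the core idea of expert expansion concrete, below is a minimal sketch in PyTorch, assuming a softmax gate over simple linear experts; the class name, the gate-widening strategy, and the trigger for adding an expert are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's method): a tiny MoE layer whose expert
# pool can be grown when a new task is detected.
import torch
import torch.nn as nn


class ExpandableMoE(nn.Module):
    """Mixture-of-Experts layer with an expandable expert pool."""

    def __init__(self, in_dim: int, out_dim: int, num_experts: int = 2):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.experts = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(num_experts)
        )
        self.gate = nn.Linear(in_dim, num_experts)

    def add_expert(self) -> None:
        """Append a fresh expert and widen the gate to route to it."""
        self.experts.append(nn.Linear(self.in_dim, self.out_dim))
        old_gate = self.gate
        new_gate = nn.Linear(self.in_dim, len(self.experts))
        with torch.no_grad():
            # Preserve the learned routing weights for the existing experts.
            new_gate.weight[: old_gate.out_features].copy_(old_gate.weight)
            new_gate.bias[: old_gate.out_features].copy_(old_gate.bias)
        self.gate = new_gate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)            # (batch, E)
        outputs = torch.stack([e(x) for e in self.experts], 1)   # (batch, E, out)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)      # (batch, out)


if __name__ == "__main__":
    moe = ExpandableMoE(in_dim=8, out_dim=4, num_experts=2)
    x = torch.randn(16, 8)
    print(moe(x).shape)   # torch.Size([16, 4])
    moe.add_expert()      # e.g., triggered when a new task arrives
    print(moe(x).shape)   # still torch.Size([16, 4]), now with 3 experts
```

The key design choice in this sketch is that growing the expert pool only adds parameters and copies the old gate rows, so routing behavior on previously learned tasks is preserved while the new expert is free to specialize on the incoming task.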
