

Poster in Workshop: Workshop on Sparsity in LLMs (SLLM): Deep Dive into Mixture of Experts, Quantization, Hardware, and Inference

LoRA Without Forgetting: Freezing and Sparse Masking for Low-Rank Adaptation

Juzheng Zhang · Jiacheng You · Ashwinee Panda · Tom Goldstein


Abstract: Parameter-efficient fine-tuning (PEFT) methods for large language models (LLMs), such as LoRA, alleviate the computational burden of full fine-tuning but still introduce redundant trainable parameters and remain susceptible to knowledge degradation when fine-tuned sequentially. In this work, we propose LoRA without Forgetting (LoRAF), a novel PEFT method that reduces trainable parameters while mitigating catastrophic forgetting. LoRAF achieves this by freezing the low-rank matrix $A$ and applying sparse, task-specific masks to the low-rank matrix $B$. To prevent interference between tasks, LoRAF enforces non-overlapping masks across different tasks. We evaluate LoRAF on natural language understanding and mathematical reasoning tasks using Mistral-7B. Our results demonstrate that LoRAF outperforms full fine-tuning (FFT) and LoRA while using 95% fewer trainable parameters than LoRA. In a sequential learning setting, LoRAF significantly outperforms both LoRA and FFT in mitigating catastrophic forgetting.
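
Below is a minimal PyTorch sketch of the idea as described in the abstract: the base weight and $A$ are frozen, $B$ is trainable, and each task gets a sparse mask over $B$ that does not overlap with any earlier task's mask. The class and method names (`LoRAFLinear`, `add_task`), the random initialization of $A$, the entry-level masking granularity, the `density` parameter, and the scaling factor are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class LoRAFLinear(nn.Module):
    """Linear layer with a LoRA-style adapter: A is frozen and B is updated
    only through a sparse, task-specific, non-overlapping mask (sketch)."""

    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight

        # Low-rank factors: A is frozen after random init, B is trainable.
        self.A = nn.Parameter(torch.randn(rank, in_features) / rank ** 0.5,
                              requires_grad=False)
        self.B = nn.Parameter(torch.zeros(out_features, rank))

        self.scaling = alpha / rank
        self.task_masks = []      # one boolean mask over B per task
        self.active_task = None
        # Tracks which entries of B are already claimed by some task.
        self.register_buffer("used", torch.zeros(out_features, rank, dtype=torch.bool))

    def add_task(self, density=0.05):
        """Allocate a sparse mask for a new task from entries of B not used by
        any previous task (enforcing non-overlap), then make it active."""
        free = (~self.used).nonzero(as_tuple=False)              # unused positions
        k = min(int(density * self.B.numel()), free.shape[0])
        chosen = free[torch.randperm(free.shape[0])[:k]]
        mask = torch.zeros_like(self.used)
        mask[chosen[:, 0], chosen[:, 1]] = True
        self.used |= mask                                        # reserve these entries
        self.task_masks.append(mask)
        self.active_task = len(self.task_masks) - 1
        return self.active_task

    def forward(self, x):
        out = self.base(x)
        if self.active_task is not None:
            mask = self.task_masks[self.active_task].to(device=self.B.device,
                                                        dtype=self.B.dtype)
            B_masked = self.B * mask  # only the active task's entries of B contribute
            out = out + self.scaling * (x @ self.A.t() @ B_masked.t())
        return out
```

Because the forward pass multiplies $B$ by the active task's mask, entries of $B$ outside that mask receive zero gradient from this layer, so training on one task only updates its own sparse slice of $B$; non-overlapping masks are what keep one task's updates from touching the entries assigned to another task.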
