

Poster in Workshop: Neural Network Weights as a New Data Modality

Learning on LoRAs: GL-Equivariant Processing of Low-Rank Weight Spaces for Large Finetuned Models

Theo Putterman · Derek Lim · Yoav Gelberg · Stefanie Jegelka · Haggai Maron

Keywords: [ Foundation models ] [ Equivariance ] [ Finetuning ] [ Weight-space learning ] [ LoRA ]


Abstract:

Low-rank adaptations (LoRAs) have revolutionized the finetuning of large foundation models, enabling efficient adaptation even with limited computational resources. The resulting proliferation of LoRAs presents exciting opportunities for applying machine learning techniques that take these low-rank weights themselves as inputs. In this paper, we investigate the potential of Learning on LoRAs (LoL), a new paradigm where machine learning models learn and make predictions on datasets of LoRA weights. We first identify the inherent parameter symmetries of low-rank decompositions of weights, which differ significantly from the parameter symmetries of standard neural networks. To efficiently process LoRA weights, we develop several symmetry-aware invariant or equivariant LoL models, using tools such as canonicalization, invariant featurization, and equivariant layers. In diverse experiments, we show that our LoL architectures can process LoRA weights to predict CLIP score, finetuning data attributes, finetuning data membership, and accuracy on downstream tasks. As an example of the utility of LoL, our LoL models can accurately estimate the CLIP score of diffusion models and the ARC-C test accuracy of LLMs over 50,000 times faster than standard evaluation. As part of this work, we finetuned and will release a dataset of over ten thousand text-to-image diffusion model and language model LoRAs.
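The GL-equivariance in the title refers to the symmetry of the low-rank factorization itself: a LoRA update ΔW = BA is unchanged if the factors are transformed by any invertible matrix G, i.e. (B, A) → (BG, G⁻¹A). The sketch below (not code from the paper; the dimensions, numpy usage, and the choice of singular values as an invariant feature are illustrative assumptions) demonstrates this symmetry and one simple GL(r)-invariant featurization.

```python
import numpy as np

# A LoRA update is a low-rank product Delta_W = B @ A, with
# B in R^{d x r} and A in R^{r x k}. For any invertible G in GL(r),
# the pair (B @ G, inv(G) @ A) represents the exact same update,
# so a Learning-on-LoRAs model should respect this symmetry.

rng = np.random.default_rng(0)
d, k, r = 64, 32, 4                       # illustrative dimensions

B = rng.standard_normal((d, r))
A = rng.standard_normal((r, k))
G = rng.standard_normal((r, r))           # almost surely invertible

B_t, A_t = B @ G, np.linalg.inv(G) @ A    # GL(r)-transformed factors

# The weight update itself is unchanged by the transformation.
assert np.allclose(B @ A, B_t @ A_t)

# One simple invariant featurization (an illustrative choice, not
# necessarily the paper's): singular values of the product B @ A.
sv = np.linalg.svd(B @ A, compute_uv=False)
sv_t = np.linalg.svd(B_t @ A_t, compute_uv=False)
assert np.allclose(sv, sv_t)
```

Because any function of the product BA (or of quantities derived from it, such as its singular values) is automatically invariant to this GL(r) action, such features give one route to symmetry-aware LoL models; canonicalization and equivariant layers, as mentioned in the abstract, are alternatives that retain more of the factorized structure.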
