

In-Person Poster presentation / poster accept

Editing models with task arithmetic

Gabriel Ilharco · Marco Tulio Ribeiro · Mitchell Wortsman · Ludwig Schmidt · Hannaneh Hajishirzi · Ali Farhadi

MH1-2-3-4 #39

Keywords: [ Deep Learning and representational learning ] [ weight interpolation ] [ pre-trained models ] [ model patching ] [ merging models ] [ Fine-tuning ] [ model editing ] [ transfer learning ]


Abstract:

Changing how pre-trained models behave---e.g., improving their performance on a downstream task or mitigating biases learned during pre-training---is a common practice when developing machine learning systems. In this work, we propose a new paradigm for steering the behavior of neural networks, centered around task vectors. A task vector specifies a direction in the weight space of a pre-trained model, such that movement in that direction improves performance on the task. We build task vectors by subtracting the weights of a pre-trained model from the weights of the same model after fine-tuning on a task. We show that these task vectors can be modified and combined through arithmetic operations such as negation and addition, and that the behavior of the resulting model is steered accordingly. Negating a task vector decreases performance on the target task, with little change in model behavior on control tasks. Moreover, task vectors can be added together to improve performance on multiple tasks at once. Finally, when tasks are linked by an analogy relationship of the form ``A is to B as C is to D'', combining task vectors from three of the tasks can improve performance on the fourth, even when no data from the fourth task is used for training.
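The arithmetic described in the abstract can be sketched in a few lines. Below is a minimal, illustrative sketch using plain Python dicts of floats as stand-ins for model state dicts; the function names (`make_task_vector`, `apply_task_vector`, `add_task_vectors`) and the toy parameter values are assumptions for illustration, not the authors' released code.

```python
def make_task_vector(pretrained, finetuned):
    """Task vector: tau = theta_finetuned - theta_pretrained, per parameter."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_task_vector(pretrained, task_vector, scale=1.0):
    """Edited model: theta_new = theta_pretrained + scale * tau.
    scale = -1.0 negates the task vector (forgetting the task)."""
    return {k: pretrained[k] + scale * task_vector[k] for k in pretrained}

def add_task_vectors(*vectors):
    """Elementwise sum of task vectors, for multi-task merging."""
    return {k: sum(v[k] for v in vectors) for k in vectors[0]}

# Toy "models" with two parameters (hypothetical values):
theta_pre = {"w": 1.0, "b": 0.0}
theta_a = {"w": 1.5, "b": 0.2}   # fine-tuned on task A
theta_b = {"w": 0.8, "b": -0.1}  # fine-tuned on task B

tau_a = make_task_vector(theta_pre, theta_a)
tau_b = make_task_vector(theta_pre, theta_b)

# Negation: move away from task A to degrade it while staying near theta_pre.
theta_forget_a = apply_task_vector(theta_pre, tau_a, scale=-1.0)

# Addition: merge both tasks into a single multi-task model.
theta_multi = apply_task_vector(theta_pre, add_task_vectors(tau_a, tau_b))

# Analogy ("A is to B as C is to D"): tau_D is approximated as
# tau_C + (tau_B - tau_A), then applied the same way as above.
```

In practice each value would be a weight tensor (e.g., an entry of a PyTorch `state_dict`), and the scaling coefficient is typically tuned on held-out validation data.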
