

Oral in Workshop: Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions

Fine-Tuning Pretrained Models with NVIB for Improved Generalisation

Fabio Fehr · Alina Elena Baia · Xiaoguang Chang · Andrei Catalin Coman · Karl El Hajal · Dina Zein · Shashi Kumar · Juan Pablo Zuluaga Gomez · Andrea Cavallaro · Damien Teney · James Henderson

Keywords: [ Transformers ] [ Nonparametric Variational Information Bottleneck ] [ Fine-tuning ] [ Out-of-domain generalisation ] [ Regularisation ]


Abstract:

Fine-tuned pretrained attention-based models often struggle to generalise, leading to poor performance on tasks such as out-of-domain transfer, distribution shifts, and few-shot learning. This limitation is prevalent across modalities such as speech, text, graphs, and vision. Nonparametric Variational Information Bottleneck (NVIB) is an attention-based information-theoretic regulariser, applicable to pretrained models, that has been shown to improve generalisation. However, prior work has applied NVIB only to the text modality and without fine-tuning. We investigate whether NVIB’s ability to remove information from pretrained embeddings helps the model avoid spurious correlations with noisy and superficial features during fine-tuning. We are the first to integrate NVIB regularisation during fine-tuning across multiple diverse models and modalities. This required architectural modifications that enhance adaptability and stability during fine-tuning and simplify evaluation. We found improved out-of-distribution generalisation in speech quality assessment and language identification, text with induced attention sparsity, graph-based link prediction, and few-shot image classification.
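As a rough illustration of the general idea described above (an information-theoretic penalty that removes information from pretrained representations during fine-tuning), the sketch below adds a simple Gaussian variational bottleneck with a KL penalty to a standard fine-tuning step. It is not the NVIB layer from the paper: the Gaussian posterior, the encoder/head interfaces, and the weight beta are hypothetical placeholders standing in for the nonparametric, attention-based formulation.

# Illustrative sketch only: a generic information-bottleneck-style regulariser
# added to a fine-tuning loss. This is NOT the paper's NVIB implementation;
# the bottleneck parameterisation and all names below are placeholders.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Maps pooled pretrained embeddings to a Gaussian posterior and samples
    from it, so a KL term can penalise information retained from the
    pretrained features."""

    def __init__(self, dim: int):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.logvar = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor):
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: sample while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # KL divergence to a standard Gaussian prior, averaged over the batch.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        return z, kl

def fine_tune_step(encoder, adapter, head, batch, optimizer, beta=1e-3):
    """One fine-tuning step: task loss plus a weighted information penalty."""
    x, y = batch
    h = encoder(x)        # pooled pretrained representations, shape [batch, dim]
    z, kl = adapter(h)    # bottlenecked representations + KL penalty
    loss = nn.functional.cross_entropy(head(z), y) + beta * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In this sketch, beta controls how strongly information from the pretrained embeddings is suppressed; the abstract's claim is that this kind of pressure discourages reliance on noisy or superficial features during fine-tuning.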
