

Spotlight in Workshop: Neural Network Weights as a New Data Modality

Model Diffusion for Certifiable Few-shot Transfer Learning

Fady Rezk · Royson Lee · Henry Gouk · Timothy Hospedales · Minyoung Kim

Keywords: [ personalization ] [ transfer learning ] [ few-shot learning ] [ cross-task transfer ] [ PAC-Bayes ] [ VLM ] [ LLM ]


Abstract:

In modern large-scale deep learning, a prevalent and effective workflow for solving low-data problems is adapting powerful pre-trained foundation models (FMs) to new tasks via parameter-efficient fine-tuning (PEFT). However, while empirically effective, the resulting solutions lack generalisation guarantees to certify their accuracy, which may be required for ethical or legal reasons prior to deployment in high-importance applications. In this paper, we develop a novel transfer learning approach designed to facilitate non-vacuous, learning-theoretic generalisation guarantees for downstream tasks, even in the low-shot regime. Specifically, we first use upstream tasks to train a diffusion model over PEFT parameters. We then learn the downstream task by a sample-and-evaluate procedure: sampling plausible PEFTs from the trained diffusion model and selecting the one with the highest likelihood on the downstream data. Crucially, this confines our model hypothesis to a finite set of PEFT samples. In contrast to learning in the typical continuous hypothesis spaces of neural network weights, this facilitates tighter risk certificates. We instantiate our bound and show non-trivial generalisation guarantees compared to existing learning approaches, which lead to vacuous bounds in the low-shot regime.
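The following is a minimal sketch (not the authors' code) of the sample-and-evaluate idea and the resulting finite-hypothesis risk certificate: draw a finite set of K candidate adapters, keep the one that fits the m-shot downstream data best, and certify it with a Hoeffding-plus-union bound that scales with ln(K)/m rather than with the dimensionality of the weight space. The diffusion sampler is replaced by a random stand-in (`sample_peft`), likelihood-based selection is replaced by lowest empirical 0-1 error, and all names and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_peft(dim: int) -> np.ndarray:
    """Stand-in for drawing one PEFT parameter vector from the trained diffusion model."""
    return rng.normal(size=dim)

def empirical_risk(theta: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    """0-1 error of a linear probe parameterised by theta on the few-shot set."""
    preds = (X @ theta > 0).astype(int)
    return float(np.mean(preds != y))

def finite_hypothesis_bound(emp_risk: float, m: int, K: int, delta: float) -> float:
    """Hoeffding + union bound over K candidates: holds with probability >= 1 - delta."""
    return emp_risk + np.sqrt(np.log(K / delta) / (2 * m))

# Toy downstream task: m labelled examples with d-dimensional features.
m, d, K, delta = 50, 16, 256, 0.05
X = rng.normal(size=(m, d))
y = (X[:, 0] > 0).astype(int)

# Sample-and-evaluate: keep the candidate with the lowest empirical risk.
candidates = [sample_peft(d) for _ in range(K)]
risks = [empirical_risk(theta, X, y) for theta in candidates]
best = int(np.argmin(risks))

# The certificate remains valid despite the data-dependent selection,
# because the union bound already accounts for all K candidates.
cert = finite_hypothesis_bound(risks[best], m, K, delta)
print(f"selected candidate {best}: empirical risk {risks[best]:.3f}, "
      f"certified risk <= {cert:.3f} (w.p. {1 - delta:.2f})")
```

Because the selected hypothesis comes from a finite set of K samples, the complexity term is only sqrt(ln(K/delta) / (2m)), which can stay well below 1 even for few-shot m, in contrast to bounds over continuous weight spaces that are typically vacuous in this regime.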
