Poster
in
Workshop: Quantify Uncertainty and Hallucination in Foundation Models: The Next Frontier in Reliable AI

Training-Free Bayesianization for Low-Rank Adapters of Large Language Models

Haizhou Shi · Yibin Wang · Ligong Han · Huan Zhang · Hao Wang

Keywords: [ Bayesian Inference ] [ Large Language Models ] [ Uncertainty Estimation ]


Abstract:

Estimating the uncertainty of Large Language Model (LLM) responses remains a critical challenge. While recent Bayesian methods have demonstrated effectiveness in quantifying uncertainty through low-rank weight updates, they typically require complex fine-tuning or post-training procedures. In this paper, we propose Training-Free Bayesianization (TFB), a novel framework that efficiently transforms existing off-the-shelf trained low-rank adapters into Bayesian ones without additional training. TFB systematically searches for the maximally acceptable level of variance in the weight posterior, constrained within a family of low-rank isotropic Gaussian distributions. We theoretically demonstrate that under mild conditions, this search process is equivalent to KL-regularized variational optimization, a generalized form of variational inference. Through comprehensive experiments, we show that TFB achieves superior uncertainty estimation and generalization compared to existing methods while eliminating the need for complex training procedures.
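The core idea described in the abstract — turning a trained low-rank adapter into a Bayesian one by finding the largest acceptable isotropic noise level — can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the exact posterior parameterization, the acceptance criterion, and the helpers `sample_bayesianized_lora` and `search_max_sigma` are assumptions for illustration, with the search implemented as a simple bisection over the noise scale.

```python
import numpy as np


def sample_bayesianized_lora(A, B, sigma, rng):
    """Draw one posterior sample of the low-rank weight update B @ A.

    Assumption (not specified in the abstract): isotropic Gaussian noise
    with std `sigma` is injected into one adapter factor, so each sampled
    update stays low-rank.
    """
    return B @ (A + sigma * rng.standard_normal(A.shape))


def search_max_sigma(eval_loss, base_loss, tol, lo=0.0, hi=1.0, iters=20):
    """Bisection for the maximally acceptable posterior std `sigma`.

    `eval_loss(sigma)` is a hypothetical callback returning the model's
    (expected) loss under noise level `sigma`; a level is accepted while
    that loss stays within `tol` of the deterministic adapter's loss.
    Assumes `eval_loss` is monotonically non-decreasing in `sigma`.
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if eval_loss(mid) <= base_loss + tol:
            lo = mid  # still acceptable: try a larger variance
        else:
            hi = mid  # too noisy: shrink the interval
    return lo


# Toy usage: a synthetic loss that grows quadratically with sigma,
# so the acceptable boundary sits at sigma = sqrt(tol).
base_loss = 1.0
sigma_max = search_max_sigma(lambda s: base_loss + s * s, base_loss, tol=0.04)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 16))   # low-rank factor, r x d_in
B = rng.standard_normal((32, 4))   # low-rank factor, d_out x r
delta_W = sample_bayesianized_lora(A, B, sigma_max, rng)
```

In this toy setting the search converges to roughly `sigma = 0.2`; in practice the acceptance test would be evaluated on held-out data with Monte Carlo samples of the adapter.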