

Poster in Workshop: Mathematical and Empirical Understanding of Foundation Models (ME-FoMo)

Aligning Foundation Models for Language with Preferences through $f$-divergence Minimization

Dongyoung Go · Tomek Korbak · Germán Kruszewski · Jos Rozen · Nahyeon Ryu · Marc Dymetman

Keywords: [ Reinforcement Learning with KL penalties ] [ language model alignment ] [ f-divergence ] [ preference modeling ] [ Reinforcement Learning from Human Feedback (RLHF) ] [ Generation with Distributional Control (GDC) ] [ NLP ]


Abstract: Aligning language models with preferences can be posed as approximating a target distribution representing some desired behavior. Existing approaches differ both in the functional form of the target distribution and the algorithm used to approximate it. For instance, Reinforcement Learning from Human Feedback (RLHF) corresponds to minimizing a reverse KL from an implicit target distribution arising from a KL penalty in the objective. On the other hand, Generative Distributional Control (GDC) has an explicit target distribution and minimizes a forward KL from it using the Distributional Policy Gradient (DPG) algorithm. In this paper, we propose a new approach, $f$-DPG, which allows the use of any $f$-divergence to approximate any target distribution. $f$-DPG unifies both frameworks (RLHF, GDC) and the approximation methods (DPG, RL with KL penalties). We show the practical benefits of various choices of divergence objectives and demonstrate that there is no universally optimal objective but that different divergences are good for approximating different targets.
