

Poster in Workshop: Deep Generative Model in Machine Learning: Theory, Principle and Efficacy

Stable Consistency Tuning: Understanding and Improving Consistency Models

Fu-Yun Wang · Zhengyang Geng · Hongsheng Li

Keywords: [ Consistency models ] [ Diffusion models ]


Abstract:

Diffusion models achieve high-quality generation but suffer from slow sampling due to their iterative denoising process. Consistency models offer a faster alternative with competitive performance, trained via consistency distillation from pretrained diffusion models or directly from raw data. We introduce a novel framework interpreting consistency models through a Markov Decision Process (MDP), framing their training as value estimation via Temporal Difference (TD) Learning. This perspective reveals limitations in existing training strategies. Building on Easy Consistency Tuning (ECT), we propose Stable Consistency Tuning (SCT), which enhances variance reduction using the score identity. SCT significantly improves performance on CIFAR-10 and ImageNet-64. Code and weights will be released.
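For context, here is a minimal sketch, drawn from the standard consistency-model literature rather than from this paper, of the objective that the TD interpretation refers to. The consistency model $f_\theta$ is trained to agree with a stop-gradient (or EMA) target $f_{\theta^-}$ evaluated at an adjacent, less-noisy point on the same trajectory,

\[
\mathcal{L}(\theta) \;=\; \mathbb{E}\Big[\, d\big(f_\theta(x_{t_{n+1}}, t_{n+1}),\; f_{\theta^-}(x_{t_n}, t_n)\big) \Big],
\]

so $f_\theta$ plays the role of a value estimate and $f_{\theta^-}(x_{t_n}, t_n)$ of a bootstrapped, TD-style target. The score identity cited for variance reduction is, under a Gaussian perturbation $x_t = x_0 + \sigma_t \varepsilon$,

\[
\nabla_{x_t} \log p_t(x_t) \;=\; \mathbb{E}\!\left[\frac{x_0 - x_t}{\sigma_t^2} \,\middle|\, x_t\right],
\]

which replaces a single-sample score estimate with a conditional expectation; how SCT exploits this identity specifically is detailed in the paper, not reproduced here.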
