Poster in Workshop: Deep Generative Model in Machine Learning: Theory, Principle and Efficacy

Better Sampling as a Path to Stronger VAEs

Aziz Shameem · Amit Sethi

Keywords: [ variational autoencoders ] [ sampling ] [ generative modelling ] [ Langevin Monte Carlo ]


Abstract:

Variational autoencoders (VAEs) and denoising diffusion probabilistic models (DDPMs) are two leading generative frameworks, yet DDPMs have significantly outperformed VAEs in high-fidelity image synthesis. This work investigates the latent-space structure of VAEs to understand their limitations and explores methods to enhance their generative capabilities. Through systematic experiments, we analyze how the interplay between the reconstruction loss and the KL divergence affects VAE performance, and we propose alternative latent-space sampling strategies to improve sample quality. Our results demonstrate that these sampling strategies substantially enhance VAE outputs while retaining the VAE's advantage of faster generation relative to diffusion models. Additionally, the proposed probabilistic approach ensures that generated samples do not overlap with the training data, addressing a key memorization issue in diffusion models. These findings provide insight into optimizing VAEs for more effective and controlled generative modeling.
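For context, the reconstruction/KL trade-off the abstract refers to is the standard VAE evidence lower bound (ELBO), written here in the usual notation, with a reconstruction term and a KL regularizer toward the prior $p(z)$:

$$
\mathcal{L}(\theta,\phi;x)
= \underbrace{\mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right]}_{\text{reconstruction}}
\;-\; \underbrace{D_{\mathrm{KL}}\!\left(q_\phi(z\mid x)\,\|\,p(z)\right)}_{\text{KL regularizer}}
$$

The abstract does not spell out the sampling procedure, but the keywords point to Langevin Monte Carlo in the latent space. Below is a minimal, hypothetical sketch of unadjusted Langevin dynamics over latents; the function name `langevin_sample`, the target density, the step size, and the step count are illustrative assumptions, not the authors' implementation:

```python
import torch

def langevin_sample(log_prob, z_init, n_steps=100, step_size=1e-2):
    """Unadjusted Langevin dynamics targeting exp(log_prob).

    log_prob : callable mapping a batch of latents z to per-sample log-density.
    z_init   : starting latents, e.g. draws from the standard normal prior.
    """
    z = z_init.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        # Gradient of the target log-density at the current latents.
        grad = torch.autograd.grad(log_prob(z).sum(), z)[0]
        noise = torch.randn_like(z)
        # Langevin update: drift toward high-density regions plus Gaussian noise.
        z = (z + 0.5 * step_size * grad
               + step_size ** 0.5 * noise).detach().requires_grad_(True)
    return z.detach()


# Usage sketch: the target here is the standard normal prior itself,
# standing in for whatever decoder-informed density the paper actually uses.
prior_log_prob = lambda z: -0.5 * (z ** 2).sum(dim=-1)
z0 = torch.randn(16, 32)                         # 16 samples, 32-d latent space
z_refined = langevin_sample(prior_log_prob, z0)  # latents to feed the decoder
```

In practice the target log-density would combine the prior with a decoder-based term, the step size and step count would need tuning, and a Metropolis-adjusted variant (MALA) could correct the discretization error of the unadjusted chain.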
