Poster in Workshop: Frontiers in Probabilistic Inference: Learning Meets Sampling
VIPaint: Image Inpainting with Pre-Trained Diffusion Models via Variational Inference
Sakshi Agarwal · Gabriel Hope · Erik Sudderth
Diffusion probabilistic models learn to remove noise added during training, generating novel data (e.g., images) from Gaussian noise through sequential denoising. However, conditioning this generative process on corrupted or masked images is challenging. While various methods have been proposed for inpainting masked images with diffusion priors, they often fail to produce samples from the true conditional distribution, especially for large masked regions. Moreover, many cannot be applied to latent diffusion models, which generate high-quality images while remaining efficient to train. We propose a hierarchical variational inference algorithm that optimizes a non-Gaussian Markov approximation of the true diffusion posterior. Our VIPaint method outperforms existing approaches in both the plausibility and diversity of its imputations, and is easily extended to other inverse problems such as deblurring and super-resolution.
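To make the conditioning challenge concrete, the following is a minimal, hypothetical sketch of the common replacement-based heuristic for diffusion inpainting (one of the baseline approaches the abstract alludes to, not the VIPaint algorithm itself). It runs reverse diffusion on the whole image while repeatedly overwriting the observed coordinates with appropriately noised observations. For illustration, the "denoiser" assumes a toy prior x0 ~ N(0, I), so the conditional mean of x0 given x_t has a closed form; in a real model, a trained neural network would play that role, and the noise schedule and dimensionality below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule (illustrative values, not from the paper).
T = 50
betas = np.linspace(1e-4, 0.2, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def reverse_step(x_t, t):
    """One DDPM reverse step under the toy prior x0 ~ N(0, I)."""
    a_bar_t = alpha_bars[t]
    a_bar_prev = alpha_bars[t - 1] if t > 0 else 1.0
    x0_hat = np.sqrt(a_bar_t) * x_t  # E[x0 | x_t] for the toy Gaussian prior
    coef_x0 = np.sqrt(a_bar_prev) * betas[t] / (1.0 - a_bar_t)
    coef_xt = np.sqrt(alphas[t]) * (1.0 - a_bar_prev) / (1.0 - a_bar_t)
    mean = coef_x0 * x0_hat + coef_xt * x_t
    var = betas[t] * (1.0 - a_bar_prev) / (1.0 - a_bar_t)
    return mean + np.sqrt(var) * rng.standard_normal(x_t.shape)

def inpaint(y, mask):
    """Replacement-based inpainting: sample the prior, but at every step
    overwrite the observed coordinates with a noised copy of the data."""
    x = rng.standard_normal(y.shape)
    for t in range(T - 1, -1, -1):
        a_bar = alpha_bars[t]
        noised_y = np.sqrt(a_bar) * y + np.sqrt(1 - a_bar) * rng.standard_normal(y.shape)
        x = np.where(mask, noised_y, x)  # force agreement on the known region
        x = reverse_step(x, t)
    return np.where(mask, y, x)  # paste the exact observations back

y = np.zeros(8)
y[:4] = 1.0                  # observed pixel values
mask = np.arange(8) < 4      # True = observed coordinate
sample = inpaint(y, mask)
```

Because each replacement only enforces agreement locally in time, the resulting samples need not come from the true posterior over the masked region, which is the gap a variational approximation like VIPaint is designed to close.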