In-Person Poster presentation / top 25% paper

Learning multi-scale local conditional probability models of images

Zahra Kadkhodaie · Florentin Guth · Stéphane Mallat · Eero Simoncelli

MH1-2-3-4 #114

Keywords: [ Generative models ] [ Image priors ] [ Markov wavelet conditional models ] [ multi-scale score-based image synthesis ] [ super-resolution ] [ denoising ]


Abstract:

Deep neural networks can learn powerful prior probability models for images, as evidenced by the high-quality generations obtained with recent score-based diffusion methods. But the means by which these networks capture complex global statistical structure, apparently without suffering from the curse of dimensionality, remain a mystery. To study this, we incorporate diffusion methods into a multi-scale decomposition, reducing dimensionality by assuming a stationary local Markov model for wavelet coefficients conditioned on coarser-scale coefficients. We instantiate this model using convolutional neural networks (CNNs) with local receptive fields, which enforce both the stationarity and Markov properties. Global structures are captured using a CNN with receptive fields covering the entire (but small) low-pass image. We test this model on a dataset of face images, which are highly non-stationary and contain large-scale geometric structures. Remarkably, denoising, super-resolution, and image synthesis results all demonstrate that these structures can be captured with significantly smaller conditioning neighborhoods than required by a Markov model implemented in the pixel domain. Our results show that score estimation for large complex images can be reduced to low-dimensional Markov conditional models across scales, alleviating the curse of dimensionality.
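The multi-scale decomposition described in the abstract splits an image into a half-resolution low-pass band and detail (wavelet) bands; the detail coefficients are then modeled conditioned on the coarser band. The sketch below illustrates only the decomposition step with a one-level 2-D Haar transform in numpy; it is a hedged illustration of the general wavelet structure, not the authors' implementation (which uses CNN score models over these bands).

```python
import numpy as np

def haar_decompose(x):
    """One level of a 2-D Haar wavelet transform.

    Splits an image into a low-pass band and three detail bands
    (horizontal, vertical, diagonal), each at half resolution.
    In a Markov wavelet conditional model, the detail bands would be
    modeled conditioned on local neighborhoods of the low-pass band.
    """
    a = x[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    low   = (a + b + c + d) / 4.0  # local average (coarse image)
    horiz = (a - b + c - d) / 4.0  # difference across columns
    vert  = (a + b - c - d) / 4.0  # difference across rows
    diag  = (a - b - c + d) / 4.0  # diagonal difference
    return low, (horiz, vert, diag)

def haar_reconstruct(low, details):
    """Invert haar_decompose exactly (the transform is orthogonal up
    to scaling, so no information is lost)."""
    horiz, vert, diag = details
    a = low + horiz + vert + diag
    b = low - horiz + vert - diag
    c = low + horiz - vert - diag
    d = low - horiz - vert + diag
    x = np.empty((2 * low.shape[0], 2 * low.shape[1]))
    x[0::2, 0::2] = a
    x[0::2, 1::2] = b
    x[1::2, 0::2] = c
    x[1::2, 1::2] = d
    return x
```

Applying `haar_decompose` recursively to the low-pass band yields the full multi-scale pyramid; the paper's dimensionality reduction comes from modeling each detail band with a conditional score network whose receptive field covers only a small local neighborhood of the coarser band.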
