

Poster

$\infty$-Diff: Infinite Resolution Diffusion with Subsampled Mollified States

Sam Bond-Taylor · Chris G Willcocks

Halle B #272
Thu 9 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract: This paper introduces $\infty$-Diff, a generative diffusion model defined in an infinite-dimensional Hilbert space, which can model infinite resolution data. By training on randomly sampled subsets of coordinates and denoising content only at those locations, we learn a continuous function that can be sampled at arbitrary resolution. Unlike prior neural field-based infinite-dimensional models, which use point-wise functions requiring latent compression, our method employs non-local integral operators to map between Hilbert spaces, allowing spatial context aggregation. This is achieved with an efficient multi-scale function-space architecture that operates directly on raw sparse coordinates, coupled with a mollified diffusion process that smooths out irregularities. Through experiments on high-resolution datasets, we find that even at an $8\times$ subsampling rate our model still produces high-quality samples. This leads to significant run-time and memory savings, delivers samples with lower FID scores, and scales beyond the training resolution while retaining detail.
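To make the coordinate-subsampled training idea from the abstract concrete, below is a minimal, hypothetical PyTorch sketch: it samples a random subset of pixel coordinates from an image, adds diffusion noise to the function values at only those locations, and trains a denoiser conditioned on the coordinates. It is not the paper's implementation: the cosine noise schedule, the `PointwiseDenoiser` placeholder (a point-wise MLP rather than the paper's non-local integral operators), and the use of plain white noise instead of a mollified (Gaussian-smoothed) noise process are all simplifying assumptions made here for illustration.

```python
import torch


def subsample_coords(image, num_points):
    """Randomly subsample pixel coordinates and their values from a (C, H, W) image."""
    C, H, W = image.shape
    idx = torch.randperm(H * W)[:num_points]
    ys, xs = idx // W, idx % W
    coords = torch.stack([ys / (H - 1), xs / (W - 1)], dim=-1)  # (N, 2) in [0, 1]^2
    values = image[:, ys, xs].T                                 # (N, C)
    return coords, values


class PointwiseDenoiser(torch.nn.Module):
    """Placeholder coordinate-conditioned MLP (the paper instead uses a
    multi-scale architecture built from non-local integral operators)."""

    def __init__(self, channels=3, hidden=128, T=1000):
        super().__init__()
        self.T = T
        self.net = torch.nn.Sequential(
            torch.nn.Linear(channels + 2 + 1, hidden), torch.nn.GELU(),
            torch.nn.Linear(hidden, hidden), torch.nn.GELU(),
            torch.nn.Linear(hidden, channels),
        )

    def forward(self, x_t, coords, t):
        t_feat = t.float().expand(x_t.shape[0], 1) / self.T
        return self.net(torch.cat([x_t, coords, t_feat], dim=-1))


def diffusion_training_step(model, image, num_points=1024, T=1000):
    """One training step: denoise only at a sparse subset of coordinates."""
    coords, x0 = subsample_coords(image, num_points)
    t = torch.randint(0, T, (1,))
    alpha_bar = torch.cos(0.5 * torch.pi * t / T) ** 2  # assumed cosine schedule
    # White noise here; the paper mollifies the noise so states live in a Hilbert space.
    noise = torch.randn_like(x0)
    x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
    pred = model(x_t, coords, t)
    return torch.nn.functional.mse_loss(pred, noise)


# Usage: a single gradient step on a random image.
model = PointwiseDenoiser()
image = torch.rand(3, 256, 256)
loss = diffusion_training_step(model, image)
loss.backward()
```

Because the loss only touches the sampled coordinates, the per-step cost scales with the number of sampled points rather than the full resolution, which is where the run-time and memory savings mentioned in the abstract come from.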
