Poster
in
Workshop: Deep Generative Model in Machine Learning: Theory, Principle and Efficacy

Image Interpolation with Score-based Riemannian Metrics of Diffusion Models

Shinnosuke Saito · Takashi Matsubara

Keywords: [ pre-trained diffusion models ] [ Riemannian geometry ] [ image interpolation ] [ manifold hypothesis ]


Abstract:

Diffusion models excel at content generation by implicitly learning the data manifold, yet, unlike other deep generative models equipped with latent spaces, they lack a practical way to exploit this manifold. This paper introduces a novel framework that treats the data space of a pre-trained diffusion model as a Riemannian manifold whose metric is derived from the score function. Experiments on MNIST and Stable Diffusion show that this geometry-aware approach yields smoother interpolations than linear interpolation, spherical linear interpolation, and other methods, demonstrating its potential for improved content generation and editing.
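For context, the two baselines named in the abstract can be sketched as follows. This is a minimal illustration of linear (lerp) and spherical linear (slerp) interpolation between two points in data space, not the paper's score-based Riemannian method; the function names and the toy Gaussian vectors are illustrative assumptions.

```python
import numpy as np

def lerp(x0, x1, t):
    """Linear interpolation: straight line between x0 and x1."""
    return (1.0 - t) * x0 + t * x1

def slerp(x0, x1, t, eps=1e-8):
    """Spherical linear interpolation: follows a great-circle arc,
    which better preserves the norm of Gaussian-like vectors."""
    u0 = x0 / (np.linalg.norm(x0) + eps)
    u1 = x1 / (np.linalg.norm(x1) + eps)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    if omega < eps:  # nearly parallel: slerp degenerates to lerp
        return lerp(x0, x1, t)
    return (np.sin((1.0 - t) * omega) * x0 + np.sin(t * omega) * x1) / np.sin(omega)

# Midpoint of two independent Gaussian vectors: lerp shrinks the norm
# (moving off the typical set), while slerp roughly preserves it.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(512), rng.standard_normal(512)
print(np.linalg.norm(lerp(a, b, 0.5)), np.linalg.norm(slerp(a, b, 0.5)))
```

The norm-shrinkage of lerp is one reason straight-line paths produce blurry intermediates for diffusion models; the paper's geometry-aware metric goes further by bending paths along the learned data manifold.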
