

Virtual presentation / top 25% paper

Multi-domain image generation and translation with identifiability guarantees

Shaoan Xie · Lingjing Kong · Mingming Gong · Kun Zhang

Keywords: [ Generative models ] [ Multi-domain image generation ] [ Nonlinear ICA ] [ Identifiability ] [ Image translation ]


Abstract:

Multi-domain image generation and unpaired image-to-image translation are two important and related computer vision problems. A common technique for both tasks is to learn a joint distribution from multiple marginal distributions. However, it is well known that infinitely many joint distributions can give rise to the same marginals. Hence, suitable constraints must be formulated to address this highly ill-posed problem. Inspired by recent advances in nonlinear Independent Component Analysis (ICA) theory, we propose a new method to learn the joint distribution from the marginals by enforcing a specific type of minimal change across domains. We report one of the first results connecting multi-domain generative models to identifiability, and we show why identifiability is essential and how to achieve it both theoretically and practically. We apply our method to five multi-domain image generation and six image-to-image translation tasks. The superior performance of our model supports our theory and demonstrates the effectiveness of our method. The training code is available at https://github.com/Mid-Push/i-stylegan.
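The abstract's ill-posedness claim can be made concrete with a toy example (not taken from the paper): two different joint distributions over a pair of binary variables that nevertheless share identical marginals. This is exactly why marginals alone cannot determine the joint, and why an additional constraint such as the paper's minimal-change principle is needed. A minimal Python sketch:

```python
import numpy as np

# Toy illustration of the ill-posedness: two distinct joint distributions
# over binary variables (X, Y) with identical marginal distributions.

# Joint 1: X and Y independent, each uniform on {0, 1}.
joint_independent = np.array([[0.25, 0.25],
                              [0.25, 0.25]])

# Joint 2: X and Y perfectly correlated, each still uniform on {0, 1}.
joint_correlated = np.array([[0.5, 0.0],
                             [0.0, 0.5]])

for name, joint in [("independent", joint_independent),
                    ("correlated", joint_correlated)]:
    marginal_x = joint.sum(axis=1)  # P(X): sum over Y
    marginal_y = joint.sum(axis=0)  # P(Y): sum over X
    print(name, "P(X) =", marginal_x, "P(Y) =", marginal_y)

# Both joints yield P(X) = [0.5 0.5] and P(Y) = [0.5 0.5],
# yet they encode very different dependence structures.
```

Since infinitely many convex mixtures of such joints also match the marginals, recovering the correct joint requires identifiability assumptions of the kind the paper develops.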
