

2019 Poster

Sliced Wasserstein Auto-Encoders

Soheil Kolouri · Phillip Pope · Charles Martin · Gustavo Rohde

Keywords: [ unsupervised learning ] [ optimal transport ] [ wasserstein distances ] [ auto-encoders ]


Abstract:

In this paper, we use the geometric properties of the optimal transport (OT) problem and the Wasserstein distances to define a prior distribution for the latent space of an auto-encoder. We introduce Sliced-Wasserstein Auto-Encoders (SWAE), which enable one to shape the distribution of the latent space into any samplable probability distribution without training an adversarial network or specifying a likelihood function. In short, we regularize the auto-encoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a samplable prior distribution. We show that the proposed formulation has an efficient numerical solution that provides capabilities similar to those of Wasserstein Auto-Encoders (WAE) and Variational Auto-Encoders (VAE), while benefiting from an embarrassingly simple implementation. We provide an extensive error analysis for our algorithm and show its merits on three benchmark datasets.
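As a concrete illustration of the regularizer the abstract describes, below is a minimal NumPy sketch of a Monte-Carlo sliced-Wasserstein estimator: project both sample sets onto random directions on the unit sphere, sort the resulting 1-D projections, and average the point-wise discrepancies. The function name, the number of projections, and the Gaussian "prior" in the usage example are illustrative assumptions, not the authors' reference implementation; in an SWAE-style loss this quantity would be added to the reconstruction term, with `x` the encoded mini-batch and `y` an equally sized draw from the chosen samplable prior.

```python
import numpy as np

def sliced_wasserstein_distance(x, y, num_projections=50, p=2, rng=None):
    """Monte-Carlo estimate of the p-th power of the sliced Wasserstein
    distance between two sample sets x, y of shape (n, d).

    Both sets must contain the same number of samples, since the 1-D
    Wasserstein distance is computed by matching sorted projections.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[1]
    # Draw random directions and normalize them onto the unit sphere S^{d-1}.
    theta = rng.normal(size=(num_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto each direction -> 1-D distributions.
    x_proj = x @ theta.T   # shape (n, num_projections)
    y_proj = y @ theta.T
    # In 1-D, optimal transport reduces to matching sorted samples.
    x_sorted = np.sort(x_proj, axis=0)
    y_sorted = np.sort(y_proj, axis=0)
    return np.mean(np.abs(x_sorted - y_sorted) ** p)

# Usage sketch: compare a batch of "latent codes" against prior samples.
rng = np.random.default_rng(0)
x = rng.normal(size=(128, 8))   # stand-in for encoded training samples
y = rng.normal(size=(128, 8))   # stand-in for draws from the prior
print(sliced_wasserstein_distance(x, y, rng=rng))
```

Note the design point this exploits: the prior only needs to be samplable, because the estimator works entirely on samples and sorting, with no density or adversarial critic involved.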
