

In-Person Poster presentation / poster accept

Geometrically regularized autoencoders for non-Euclidean data

Cheongjae Jang · Yonghyeon Lee · Yung-Kyun Noh · Frank Chongwoo Park

MH1-2-3-4 #65

Keywords: [ Deep Learning and representational learning ] [ autoencoders ] [ non-Euclidean data ] [ score estimation ] [ Riemannian geometry ] [ regularization ]


Abstract:

Regularization is almost de rigueur when designing autoencoders that are sparse and robust to noise. Given the recent surge of interest in machine learning problems involving non-Euclidean data, in this paper we address the regularization of autoencoders on curved spaces. We show that by ignoring the underlying geometry of the data and applying standard vector space regularization techniques, autoencoder performance can be severely degraded, or worse, training can fail to converge. Assuming that both the data space and latent space can be modeled as Riemannian manifolds, we show how to construct regularization terms in a coordinate-invariant way, and develop geometric generalizations of the denoising autoencoder and reconstruction contractive autoencoder such that the essential properties that enable the estimation of the derivative of the log-probability density are preserved. Drawing upon various non-Euclidean data sets, we show that our geometric autoencoder regularization techniques can have important performance advantages over vector space methods while avoiding other breakdowns that can result from failing to account for the underlying geometry.
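The score-estimation property the abstract refers to is the classical Euclidean result that a denoising autoencoder trained with small Gaussian corruption recovers the derivative of the log-density: the optimal denoiser r satisfies (r(x) - x) / σ² ≈ ∇ log p(x). As a minimal sketch of that Euclidean property (not the paper's geometric generalization), the following assumes 1-D standard-normal data, where the optimal denoiser is linear and the true score at x is simply -x; the names `a`, `sigma`, and `score_est` are illustrative choices, not the authors' notation.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1                                    # corruption noise level
x = rng.standard_normal(100_000)               # data ~ N(0, 1)
x_noisy = x + sigma * rng.standard_normal(x.shape)

# For Gaussian data the optimal denoiser is linear, r(x~) = a * x~;
# fit a by least squares (analytically a -> 1 / (1 + sigma^2)).
a = (x_noisy @ x) / (x_noisy @ x_noisy)

# Denoising-autoencoder score estimate at a query point x0:
#   (r(x0) - x0) / sigma^2  ≈  d/dx log p(x0)  =  -x0  for N(0, 1).
x0 = 1.5
score_est = (a * x0 - x0) / sigma**2
print(score_est)   # close to the true score -x0 = -1.5
```

The paper's contribution, per the abstract, is constructing corruption and regularization terms so that this correspondence survives when the data and latent spaces are Riemannian manifolds rather than vector spaces, where naive Euclidean noise and penalties are not coordinate-invariant.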
