

In-Person Poster presentation / poster accept

Learning topology-preserving data representations

Ilya Trofimov · Daniil Cherniavskii · Eduard Tulchinskii · Nikita Balabin · Evgeny Burnaev · Serguei Barannikov

MH1-2-3-4 #83

Keywords: [ Deep Learning and representational learning ] [ Topological Data Analysis ] [ dimensionality reduction ] [ representation learning ]


Abstract:

We propose a method for learning topology-preserving data representations (dimensionality reduction). The method aims to make the data manifold and its latent representation topologically similar by enforcing similarity of their topological features (clusters, loops, 2D voids, etc.) and of the localization of these features. The core of the method is the minimization of the Representation Topology Divergence (RTD) between the original high-dimensional data and its low-dimensional representation in latent space. RTD minimization provides closeness of topological features with strong theoretical guarantees. We develop a scheme for RTD differentiation and apply it as a loss term for the autoencoder. The proposed method, "RTD-AE", preserves the global structure and topology of the data manifold better than state-of-the-art competitors, as measured by linear correlation, triplet distance ranking accuracy, and the Wasserstein distance between persistence barcodes.
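The abstract describes RTD-AE as an autoencoder trained with an additional, differentiable RTD term computed between a data batch and its latent representation. The sketch below only illustrates how such a composite objective might be wired up in PyTorch; it is not the authors' implementation. In particular, `rtd_surrogate` is a hypothetical stand-in (a crude pairwise-distance mismatch penalty) marking where the paper's differentiable RTD, computed from persistence barcodes, would plug in. The architecture, batch size, and loss weight `lam` are likewise illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch. `rtd_surrogate` is NOT the paper's RTD;
# it is a simple stand-in so the example runs end to end.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def rtd_surrogate(x, z):
    # Placeholder for the differentiable RTD term described in the paper.
    # Here: mismatch between normalized pairwise-distance matrices of the batch
    # in input space and in latent space -- a much weaker proxy that only shows
    # where the topological loss would be added.
    dx = torch.cdist(x, x)
    dz = torch.cdist(z, z)
    dx = dx / (dx.max() + 1e-8)
    dz = dz / (dz.max() + 1e-8)
    return ((dx - dz) ** 2).mean()

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
lam = 1.0  # weight of the topological term (assumed hyperparameter)

for step in range(100):
    x = torch.rand(64, 784)          # stand-in for a real data batch
    x_hat, z = model(x)
    loss = mse(x_hat, x) + lam * rtd_surrogate(x, z)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design point the sketch conveys is that the topological term acts as a regularizer on the latent space alongside the usual reconstruction loss, so the encoder is pushed to keep the batch's multi-scale structure rather than only minimizing pointwise reconstruction error.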
