

Invited Talk in Workshop: Geometric and Topological Representation Learning

Gal Mishne: Visualizing the PHATE of deep neural networks



Abstract:

Despite their massive popularity, deep networks are difficult to interpret or analyze. Their design and training are often driven by intuition, and their tuning is performed via exhaustive hyper-parameter search. More principled evaluation and exploration of deep networks, aimed at understanding why and how certain neural networks outperform others, is critical for faster prototyping, reduced training times, and better interpretability. In this talk, I present a novel visualization algorithm that reveals the internal geometry of such networks: Multislice PHATE (M-PHATE), the first method designed explicitly to visualize how a neural network's hidden representations of data evolve over the course of training. Our approach relies on the construction of a multi-slice graph that captures both the dynamics and the community structure of the hidden units. Our visualization provides the deep learning practitioner with more detailed feedback than simple global measures (validation loss and accuracy), without requiring access to validation data. We demonstrate the use of M-PHATE to compare different neural networks in two vignettes: continual learning and generalization.
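M-PHATE has an open-source implementation. The sketch below illustrates the intended workflow on placeholder data, assuming the `m_phate` Python package with a scikit-learn-style `M_PHATE().fit_transform` interface that accepts an array of shape (epochs, hidden units, trace examples); the shapes, variable names, and plotting choices here are illustrative assumptions, not a reproduction of the authors' code.

```python
# Minimal sketch (see assumptions above): embed the evolution of one hidden
# layer's units across training epochs with the m_phate package.
import numpy as np
import matplotlib.pyplot as plt
import m_phate

# Placeholder for activations collected during training:
# each hidden unit is represented by its activations on a fixed set of
# "trace" examples, recorded at every epoch.
n_epochs, n_units, n_trace = 50, 128, 100  # illustrative sizes
activations = np.random.normal(size=(n_epochs, n_units, n_trace))

# M_PHATE builds the multi-slice graph (within-epoch similarity between units
# plus across-epoch edges linking each unit to itself over time) and returns a
# 2-D embedding with one point per (epoch, hidden unit) pair.
embedding = m_phate.M_PHATE().fit_transform(activations)  # (n_epochs * n_units, 2)

# Color points by epoch to visualize how the hidden representation evolves.
epoch_of_point = np.repeat(np.arange(n_epochs), n_units)
plt.scatter(embedding[:, 0], embedding[:, 1], c=epoch_of_point, s=2, cmap="viridis")
plt.colorbar(label="epoch")
plt.show()
```

In practice one would replace the random tensor with activations of a real network logged at the end of each epoch; the resulting scatter plot traces each hidden unit's trajectory through training, which is the view used in the continual-learning and generalization vignettes.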