Geometry-grounded Representation Learning and Generative Modeling
Abstract
Real-world data often originates from physical systems that are governed by geometric and physical laws. Yet, most machine learning methods treat this data as abstract vectors, ignoring the underlying structure that could improve both performance and interpretability. Geometry provides powerful guiding principles, from group equivariance to non-Euclidean metrics, that can preserve the symmetries and structure inherent in the data. We believe these geometric tools are well suited, and perhaps essential, for representation learning and generative modeling. We propose GRaM, a workshop centered on the principle of grounding in geometry, which we define as follows: an approach is geometrically grounded if it respects the geometric structure of the problem domain and supports geometric reasoning. This year, we aim to explore the relevance of geometric methods, particularly in the context of large models, under the theme of scale and simplicity. We seek to understand when geometric grounding remains necessary, how to scale geometric approaches effectively, and when geometric constraints can be relaxed in favor of simpler alternatives.