

Poster in Workshop: Neural Network Weights as a New Data Modality

Compressive Meta-Learning

Daniel Mas Montserrat · David Bonet · Maria Perera · Xavier Giró-i-Nieto · Alexander Ioannidis

Keywords: [ Data Summarization ] [ Neural Networks ] [ Meta-Learning ] [ Compressive Learning ] [ Differential Privacy ]


Abstract:

The rapid expansion in the size of new datasets has created a need for fast and efficient parameter-learning techniques. Compressive learning is a framework that enables efficient processing by using random, nonlinear features to project large-scale databases onto compact, information-preserving representations whose dimensionality is independent of the number of samples and that can be easily stored, transferred, and processed. These database-level summaries are then used to decode the model weights that capture essential properties of the data distribution without requiring access to the original samples, offering an efficient and privacy-friendly learning framework. However, both the encoding and decoding techniques are typically randomized and data-independent, failing to exploit the underlying structure of the data. In this work, we propose a framework that meta-learns both the encoding and decoding stages of compressive learning methods by compressing representations of the weight space with neural networks, yielding faster and more accurate systems than current state-of-the-art approaches. To demonstrate the potential of the presented Compressive Meta-Learning framework, we explore multiple applications, including autoencoders, neural network-based compressive PCA, compressive ridge regression, and compressive K-means.
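
To make the encode/decode pipeline concrete, below is a minimal sketch of compressive K-means in JAX. It uses random Fourier features as the data-independent encoder and plain gradient descent on a sketch-matching loss as the decoder; the meta-learned encoder and decoder proposed in this work, and greedy decoders such as CL-OMP used in earlier compressive learning systems, are not shown. All shapes, scales, and step sizes below are illustrative assumptions, not values from the paper.

import jax
import jax.numpy as jnp

n, d, k, m = 10_000, 2, 3, 256   # samples, dims, clusters, sketch size

# Toy data: three well-separated Gaussian blobs.
key = jax.random.PRNGKey(0)
k_ctr, k_lab, k_noise, k_omega, k_init = jax.random.split(key, 5)
true_centers = jax.random.normal(k_ctr, (k, d)) * 5.0
labels = jax.random.randint(k_lab, (n,), 0, k)
X = true_centers[labels] + jax.random.normal(k_noise, (n, d))

# Encoder: data-independent random frequencies. The sketch is the
# empirical mean of complex-exponential features, so its size m is
# fixed regardless of n. The 1/5 frequency scale is a rough,
# untuned match to the data spread (an assumption of this demo).
Omega = jax.random.normal(k_omega, (d, m)) / 5.0

def sketch(points):
    return jnp.exp(1j * points @ Omega).mean(axis=0)

z = sketch(X)  # the m-dimensional database-level summary

# Decoder: fit k equally weighted centroids whose sketch matches z.
# Plain gradient descent stands in here for CL-OMP-style greedy
# decoding or the meta-learned decoder described in the abstract.
def loss(C):
    r = z - sketch(C)
    return jnp.mean(r.real ** 2 + r.imag ** 2)

C = jax.random.normal(k_init, (k, d)) * 5.0
step = jax.jit(jax.grad(loss))
for _ in range(2000):
    C = C - 1.0 * step(C)   # step size is untuned

print("true centers:\n", true_centers)
print("recovered centroids:\n", C)

Note that the decoder only ever touches the m-dimensional summary z, never the n raw samples, which is what gives compressive learning its efficiency and privacy-friendliness; replacing the fixed random Omega and the generic descent loop with learned networks is the meta-learning step the abstract describes.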
