Poster in Workshop: Neural Network Weights as a New Data Modality
Unveiling the Potential of Superexpressive Networks in Implicit Neural Representations
Uvini Balasuriya Mudiyanselage · Woojin Cho · Minju Jo · Noseong Park · Kookjin Lee
Keywords: [ implicit neural representations ] [ signal representation ] [ superexpressive networks ]
In this study, we examine the potential of a "superexpressive" network in the context of learning neural functions for representing complex signals and performing downstream machine learning tasks. Our focus is on evaluating its performance on computer vision and scientific machine learning tasks, including signal representation, inverse problems, and the solution of partial differential equations. Through an empirical investigation on various benchmark tasks, we demonstrate that superexpressive networks, as proposed by [Zhang et al., NeurIPS 2022], which employ a specialized network structure characterized by an additional dimension beyond width and depth, namely "height", can surpass state-of-the-art implicit neural representations that rely on highly specialized nonlinear activation functions.
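To make the "height" dimension concrete, below is a minimal toy sketch in PyTorch, assuming it is realized by nested activations in which each hidden unit's fixed nonlinearity is replaced by a small trainable subnetwork (one level of nesting). The class names `SubnetActivation` and `NestNetINR`, the toy signal, and all hyperparameters are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SubnetActivation(nn.Module):
    """Height-1 activation: a small scalar-to-scalar subnetwork
    applied elementwise in place of a fixed nonlinearity."""
    def __init__(self, hidden: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shape = x.shape
        # Flatten to scalars, pass each through the subnetwork, restore shape.
        return self.net(x.reshape(-1, 1)).reshape(shape)

class NestNetINR(nn.Module):
    """Coordinate-based INR whose 'height' comes from nested activations."""
    def __init__(self, in_dim: int = 2, width: int = 64,
                 depth: int = 3, out_dim: int = 1):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), SubnetActivation()]
            d = width
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)

# Standard INR setup: fit coordinates -> signal values on a toy target.
model = NestNetINR()
coords = torch.rand(1024, 2) * 2 - 1                   # (x, y) in [-1, 1]^2
target = torch.sin(8 * coords).prod(-1, keepdim=True)  # toy 2D signal
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(coords) - target) ** 2).mean()
    loss.backward()
    opt.step()
```

Swapping `SubnetActivation` for a fixed sine nonlinearity would recover a SIREN-like baseline, which is one way to see the contrast the abstract draws between adding a structural dimension ("height") and hand-designing specialized activation functions.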