

Poster in Workshop: Neural Network Weights as a New Data Modality

End-to-End Synthesis of Neural Programs in Weight Space

Wenhao Li · Yudong Xu · Elias Khalil · Scott Sanner

Keywords: [ neural subroutines ] [ meta-learning ] [ zero-shot generalization ] [ weight-space program synthesis ]


Abstract: We introduce a framework for *end-to-end*, *zero-shot* synthesis of neural network parameters, treating the weights themselves as a "program modality". Specifically, we aim to implement $P(a,b,p)\colon (a + b)\bmod p$ for any prime $p$, where $p$ is the user intent for the subroutine $\bmod_p(a + b)$. Drawing inspiration from symbolic *sketch*-based synthesis, we treat $\bmod_p(a + b)$ as an arbitrary placeholder subroutine and assume it can be approximated by a parametric model $M_{\theta}$. A meta-learner $H_{\phi}$ is then trained to produce the parameters $\theta$ based on the user-specified $p$. By allowing gradients to flow from $M_{\theta}$ (the specialized subroutine) back through $H_{\phi}$, our approach learns to approximate the meta program $P(a,b,p)$ and to generate zero-shot functional neural subroutines. Empirically, our meta model achieves near-perfect accuracy on seen primes and nontrivial generalization to **unseen** primes, outperforming baseline architectures. These results highlight the promise of weight-space program synthesis for bridging symbolic logic and gradient-based learning in compositional tasks.
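To make the weight-generation setup concrete, below is a minimal, hypothetical sketch of the general idea in PyTorch: a hypernetwork $H_{\phi}$ maps an embedding of the prime $p$ to a flat parameter vector $\theta$, which is reshaped and applied functionally as the subroutine $M_{\theta}$, so the task loss on $(a + b) \bmod p$ backpropagates through $\theta$ into $\phi$. All names, sizes, and architectural choices here (`HyperNet`, `run_subroutine`, `MAX_P`, the 2-layer MLP) are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative hypernetwork sketch (assumed setup, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

MAX_P = 97    # assumed upper bound on primes, used for embedding/output sizing
HIDDEN = 64   # assumed hidden width of the subroutine MLP M_theta

# Shapes of M_theta: a 2-layer MLP taking (a, b) and emitting MAX_P class logits.
shapes = [(HIDDEN, 2), (HIDDEN,), (MAX_P, HIDDEN), (MAX_P,)]
n_params = sum(torch.Size(s).numel() for s in shapes)

class HyperNet(nn.Module):
    """H_phi: embeds the user-specified prime p and emits theta for M_theta."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(MAX_P + 1, 32)
        self.mlp = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, n_params))

    def forward(self, p):
        return self.mlp(self.embed(p))  # flat theta of length n_params

def run_subroutine(theta, a, b):
    """M_theta applied functionally so gradients flow back into H_phi."""
    idx, params = 0, []
    for s in shapes:
        numel = torch.Size(s).numel()
        params.append(theta[idx:idx + numel].view(s))
        idx += numel
    w1, b1, w2, b2 = params
    x = torch.stack([a, b], dim=-1).float()
    h = F.relu(F.linear(x, w1, b1))
    return F.linear(h, w2, b2)  # logits over residues 0..MAX_P-1

# One end-to-end training step: the loss on M_theta's output updates phi.
hyper = HyperNet()
opt = torch.optim.Adam(hyper.parameters(), lr=1e-3)
p = torch.tensor(7)
a = torch.randint(0, 100, (128,))
b = torch.randint(0, 100, (128,))
theta = hyper(p)
logits = run_subroutine(theta, a, b)
loss = F.cross_entropy(logits, (a + b) % p)
loss.backward()
opt.step()
```

The key design point this sketch captures is that $M_{\theta}$ has no trainable parameters of its own; every weight it uses is an output of $H_{\phi}$, so gradient descent shapes the weight-generating program rather than any single specialized subroutine.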
