Virtual presentation / poster accept

Generalization and Estimation Error Bounds for Model-based Neural Networks

Avner Shultzman · Eyar Azar · Miguel Rodrigues · Yonina Eldar

Keywords: [ Theory ] [ Generalization error ] [ Model-based neural networks ] [ Estimation error ] [ Local Rademacher complexity ]


Abstract:

Model-based neural networks provide unparalleled performance for various tasks, such as sparse coding and compressed sensing problems. Due to their strong connection with the sensing model, these networks are interpretable and inherit the prior structure of the problem. In practice, model-based neural networks exhibit higher generalization capability than ReLU neural networks, yet this phenomenon has not been addressed theoretically. Here, we leverage complexity measures, including the global and local Rademacher complexities, to provide upper bounds on the generalization and estimation errors of model-based networks. We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks, and derive practical design rules that allow one to construct model-based networks with guaranteed high generalization. Through a series of experiments, we demonstrate that our theoretical insights shed light on a few behaviours observed in practice, including the fact that ISTA and ADMM networks exhibit higher generalization abilities than ReLU networks, especially for a small number of training samples.
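For context, the bounds referenced in the abstract build on the empirical Rademacher complexity of a hypothesis class. The standard definition is given below; the notation here is ours, not necessarily the paper's, and the local variant restricts the supremum to hypotheses with small empirical error:

```latex
\hat{\mathcal{R}}_S(\mathcal{F})
  = \mathbb{E}_{\sigma}\!\left[\,\sup_{f \in \mathcal{F}}
      \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i)\right],
\qquad \sigma_i \overset{\text{i.i.d.}}{\sim} \mathrm{Uniform}\{-1, +1\},
```

where $S = (x_1, \dots, x_n)$ is the training sample. Smaller complexity of the network class translates into tighter generalization bounds, which is the mechanism behind the comparison with ReLU networks.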
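To make the object of study concrete, here is a minimal NumPy sketch of the kind of unfolded ISTA (LISTA-style) network the abstract refers to. The parameter names `W1`, `W2`, `theta`, the zero initialization, and the usage values are illustrative assumptions, not the authors' exact parameterization:

```python
import numpy as np

def soft_threshold(v, theta):
    """Proximal operator of the l1 norm (the ISTA nonlinearity)."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def unfolded_ista(y, W1, W2, theta, n_layers):
    """Forward pass of a LISTA-style unfolded ISTA network.

    y        : measurement vector, shape (m,)
    W1       : learned filter, shape (n, m); classically (1/L) * A.T
    W2       : learned matrix, shape (n, n); classically I - (1/L) * A.T @ A
    theta    : learned soft-threshold level (scalar or shape (n,))
    n_layers : number of unfolded ISTA iterations, i.e. network layers
    """
    x = np.zeros(W1.shape[0])
    for _ in range(n_layers):
        x = soft_threshold(W1 @ y + W2 @ x, theta)
    return x

# Tiny usage example with a random Gaussian dictionary A.
rng = np.random.default_rng(0)
m, n = 20, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
W1, W2 = A.T / L, np.eye(n) - (A.T @ A) / L
x_true = np.zeros(n); x_true[:3] = 1.0   # 3-sparse ground-truth signal
x_hat = unfolded_ista(A @ x_true, W1, W2, theta=0.05 / L, n_layers=10)
```

With these classical (untrained) weights each layer reproduces one ISTA iteration; in the learned setting, `W1`, `W2`, and `theta` are trained per layer, which is the structural constraint the paper's complexity bounds exploit relative to unconstrained ReLU networks.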
