

Poster

Stable Recurrent Models

John Miller · Moritz Hardt

Great Hall BC #70

Keywords: [ stability ] [ gradient descent ] [ non-convex optimization ] [ recurrent neural networks ]


Abstract:

Stability is a fundamental property of dynamical systems, yet to date it has had little bearing on the practice of recurrent neural networks. In this work, we conduct a thorough investigation of stable recurrent models. Theoretically, we prove stable recurrent neural networks are well approximated by feed-forward networks for the purposes of both inference and training by gradient descent. Empirically, we demonstrate stable recurrent models often perform as well as their unstable counterparts on benchmark sequence tasks. Taken together, these findings shed light on the effective power of recurrent networks and suggest much of sequence learning happens, or can be made to happen, in the stable regime. Moreover, our results help explain why, in many cases, practitioners succeed in replacing recurrent models with feed-forward models.
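For a vanilla RNN with update h_t = tanh(W h_{t-1} + U x_t), a standard sufficient condition for stability is that the recurrent weight matrix W has spectral norm below one, which makes the state-transition map a contraction (tanh is 1-Lipschitz). The sketch below is not code from the paper; it illustrates, under that assumption, one common way to keep a model in the stable regime by projecting W onto a spectral-norm ball after each optimizer step. The function name and radius `lam` are illustrative choices.

```python
# Illustrative sketch (not the authors' code): enforce stability of a vanilla RNN
# h_t = tanh(W h_{t-1} + U x_t) by constraining the spectral norm of W to be < 1,
# so the recurrent map is a contraction in h.
import numpy as np

def project_spectral_norm(W, lam=0.99):
    """Project W onto the spectral-norm ball of radius lam (lam < 1 gives stability)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_clipped = np.minimum(s, lam)           # clip singular values at lam
    return U @ np.diag(s_clipped) @ Vt

# Example: a random recurrent matrix that typically starts out unstable.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)) * 0.3      # spectral norm usually well above 1
print("spectral norm before:", np.linalg.norm(W, 2))
W = project_spectral_norm(W, lam=0.99)       # apply after each gradient step
print("spectral norm after: ", np.linalg.norm(W, 2))   # now <= 0.99, i.e. stable
```

In a training loop, the projection would be applied to the recurrent weights after every gradient update, leaving the input and output weights unconstrained.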
