

2019 Poster

Kernel RNN Learning (KeRNL)

Christopher Roth · Ingmar Kanitscheider · Ila Fiete

Keywords: [ neural networks ] [ supervised learning ] [ RNNs ] [ biologically plausible learning rules ] [ algorithm ]


Abstract:

We describe Kernel RNN Learning (KeRNL), a reduced-rank, temporal-eligibility-trace-based approximation to backpropagation through time (BPTT) for training recurrent neural networks (RNNs) that performs competitively with BPTT on tasks with long time dependences. The approximation replaces a rank-4 gradient tensor, which describes how past hidden-unit activations affect the current state, with a simple reduced-rank product of a sensitivity weight and a temporal eligibility trace. In this structured approximation, motivated by node perturbation, the sensitivity weights and the eligibility-kernel time scales are themselves learned by applying perturbations. The rule is another step toward biologically plausible or neurally inspired machine learning, with lower complexity: relaxed architectural requirements (no symmetric return weights), a smaller memory demand (no unfolding and storage of states over time), and a shorter feedback time.
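The approximation the abstract describes can be written out compactly. In notation introduced here for illustration (the symbols $h$, $\beta$, $\gamma$, $K$ are ours, not necessarily the paper's): the sensitivity of hidden unit $i$ at time $t$ to unit $j$'s activation a lag $\tau$ earlier is modeled as a learned sensitivity weight times a temporal eligibility kernel,

$$\frac{\partial h_i(t)}{\partial h_j(t-\tau)} \;\approx\; \beta_{ij}\, K(\tau;\gamma_j),$$

where a natural choice of kernel is an exponential decay, $K(\tau;\gamma_j) = e^{-\gamma_j \tau}$. The rank-4 tensor indexed by $(i, j, t, \tau)$ then collapses to an $n \times n$ matrix $\beta$ plus $n$ kernel time scales $\gamma$.

Below is a minimal NumPy sketch of how such an update could run online, given only the abstract. It is an illustration under the assumptions above, not the authors' implementation: the names (`e_W`, `beta`, `gamma`), the tanh RNN, and the placeholder error signal are ours, and the perturbation-based learning of `beta` and `gamma` mentioned in the abstract is omitted (both are held fixed here).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 8                                # hidden units, input dims
lr = 1e-3

W = rng.normal(0, 1 / np.sqrt(n), (n, n))   # recurrent weights
U = rng.normal(0, 1 / np.sqrt(m), (n, m))   # input weights
beta = np.eye(n)          # sensitivity weights (learned by perturbation in KeRNL)
gamma = np.full(n, 0.1)   # per-unit kernel decay rates (likewise learned in KeRNL)

h = np.zeros(n)
e_W = np.zeros((n, n))    # eligibility trace paired with W

for t in range(1000):
    x = rng.normal(size=m)                  # stand-in input stream
    h_prev = h
    h = np.tanh(W @ h_prev + U @ x)
    dphi = 1.0 - h ** 2                     # tanh'(pre-activation)

    # Eligibility trace: exponential low-pass filter (rate gamma_i) of the
    # instantaneous Jacobian term d h_i / d W_ij = phi'_i * h_j(t-1).
    # Unlike BPTT, no unrolled history of states is stored.
    e_W = (1.0 - gamma)[:, None] * e_W + np.outer(dphi, h_prev)

    # Placeholder error signal: derivative of 0.5 * ||h||^2, i.e. a loss that
    # drives activity toward zero; illustration only.
    dL_dh = h

    # Reduced-rank credit assignment:
    # dL/dW_ij ~ (sum_k dL/dh_k * beta_ki) * e_ij.
    grad_W = (beta.T @ dL_dh)[:, None] * e_W
    W -= lr * grad_W
```

This sketch also makes the abstract's memory claim concrete: the trace `e_W` has the same shape as `W`, so storage is $O(n^2)$ independent of sequence length, whereas BPTT unrolled over $T$ steps stores $O(nT)$ hidden states.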
