In-Person Poster presentation / top 25% paper

Efficient recurrent architectures through activity sparsity and sparse back-propagation through time

Anand Subramoney · Khaleelulla Khan Nazeer · Mark Schoene · Christian Mayr · David Kappel

MH1-2-3-4 #81

Keywords: [ Deep Learning and representational learning ] [ activity sparsity ] [ gesture recognition ] [ recurrent network ] [ efficiency ] [ DVS ] [ RNN ] [ language modeling ] [ GRU ]


Abstract:

Recurrent neural networks (RNNs) are well suited to sequence tasks in resource-constrained systems because of their expressivity and low computational requirements. However, a gap remains between the efficiency and performance RNNs can deliver and what real-world applications require. Propagating the activations of all neurons to every connected neuron at every time step, together with the sequential dependence of those activations, imposes memory and computational costs that make RNNs inefficient to train and use. We propose a solution inspired by biological neuron dynamics that makes the communication between RNN units sparse and discrete, which in turn makes the backward pass with backpropagation through time (BPTT) computationally sparse and efficient. We base our model on the gated recurrent unit (GRU), extending it with units that communicate via discrete events triggered by a threshold, so that no information is communicated to other units in the absence of events. We show theoretically that the communication between units, and hence the computation required for both the forward and backward passes, scales with the number of events in the network. Our model achieves this efficiency without compromising task performance, demonstrating performance competitive with state-of-the-art recurrent network models on real-world tasks, including language modeling. The dynamic activity sparsity mechanism also makes our model well suited for novel energy-efficient neuromorphic hardware. Code is available at https://github.com/KhaleelKhan/EvNN/.
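The event mechanism described in the abstract lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch rendering of the idea, not the authors' EvNN implementation (the linked repository contains the actual, optimized code): a GRU-style state update whose recurrent input is the sparse event output of the previous step, a per-unit threshold that gates when a unit communicates, and a surrogate gradient so BPTT can pass through the discrete events. The class and parameter names, the surrogate width, and the soft-reset choice are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class SpikeFunction(torch.autograd.Function):
    """Heaviside step with a rectangular surrogate gradient so that
    back-propagation through time can flow through the event threshold."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Gradients pass only near the threshold; the width 0.5 is an
        # illustrative assumption, not a value from the paper.
        return grad_output * (v.abs() < 0.5).float()


class EventGRUCell(nn.Module):
    """Hypothetical event-based GRU cell: a GRU-style state update whose
    recurrent input is the sparse event output, so silent units
    communicate nothing to other units."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.lin_x = nn.Linear(input_size, 3 * hidden_size)
        self.lin_y = nn.Linear(hidden_size, 3 * hidden_size, bias=False)
        # Per-unit event threshold (learnable here; an assumption).
        self.threshold = nn.Parameter(torch.ones(hidden_size))

    def forward(self, x, y_prev, c_prev):
        # Gates see only the sparse output y_prev: where a unit was
        # silent, its column of lin_y contributes nothing.
        u_x, r_x, z_x = self.lin_x(x).chunk(3, dim=-1)
        u_y, r_y, z_y = self.lin_y(y_prev).chunk(3, dim=-1)
        u = torch.sigmoid(u_x + u_y)        # update gate
        r = torch.sigmoid(r_x + r_y)        # reset gate
        z = torch.tanh(z_x + r * z_y)       # candidate state
        c = u * c_prev + (1.0 - u) * z      # internal state update
        event = SpikeFunction.apply(c - self.threshold)
        y = event * c                       # output only where events fire
        c = c - event * self.threshold      # soft reset of firing units
        return y, c


# Minimal usage over a toy sequence (all shapes are assumptions).
cell = EventGRUCell(input_size=8, hidden_size=16)
x_seq = torch.randn(20, 4, 8)               # (time, batch, features)
y = torch.zeros(4, 16)
c = torch.zeros(4, 16)
for x_t in x_seq:
    y, c = cell(x_t, y, c)
```

Because the gates only ever receive the sparse output y, the recurrent weight columns belonging to silent units contribute nothing in the forward pass, and the surrogate gradient's limited support around the threshold confines the backward pass to active or near-threshold units. This is the intuition behind the abstract's claim that computation in both passes scales with the number of events; realizing the savings in wall-clock time requires kernels that actually skip the zeros, as in the linked EvNN repository.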
