[Submitted on 17 Sep 2020]

Abstract: We derive the backpropagation algorithm for spiking neural networks composed
of leaky integrate-and-fire neurons operating in continuous time. This
algorithm, EventProp, computes the exact gradient of an arbitrary loss function
of spike times and membrane potentials by backpropagating errors in time. By
leveraging methods from optimal control theory, we are able, for the first
time, to backpropagate errors through spike discontinuities without resorting
to approximations or smoothing operations. EventProp can be applied to spiking
networks with
arbitrary connectivity, including recurrent, convolutional and deep
feed-forward architectures. While we consider the leaky integrate-and-fire
neuron model in this work, our methodology to derive the gradient can be
applied to other spiking neuron models. Because errors are backpropagated in an
event-based manner (at spike times), EventProp requires state variables to be
stored only at these times, which results in favorable memory requirements. We
demonstrate learning with gradients computed via EventProp in a deep spiking
network, using an event-based simulator and a non-linearly separable dataset
encoded in spike-time latencies. Our work supports the rigorous study of
gradient-based methods to train spiking neural networks while providing
insights toward the development of learning algorithms in neuromorphic
hardware.
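
To make the event-based state handling concrete, below is a minimal sketch of an
event-driven leaky integrate-and-fire neuron. It is not the paper's EventProp
implementation: the delta-current synapse model, the name simulate_lif, and all
parameter values are illustrative assumptions. Between spikes the membrane
potential decays in closed form, so state needs to be updated (and stored) only
at spike events, which is the property behind EventProp's favorable memory
requirements.

    import math

    def simulate_lif(input_spikes, tau_mem=10e-3, v_th=1.0):
        """Event-driven LIF neuron with delta-current synapses.

        input_spikes: iterable of (time, weight) pairs.
        Returns the list of output spike times.
        """
        v = 0.0          # membrane potential
        t_last = 0.0     # time of the previous event
        out = []
        for t, w in sorted(input_spikes):
            # Exact exponential decay between events: no time-stepped
            # integration, so state only needs to be touched (and stored)
            # at spike times.
            v *= math.exp(-(t - t_last) / tau_mem)
            v += w       # instantaneous jump caused by the input spike
            t_last = t
            if v >= v_th:            # threshold crossing -> output spike
                out.append(t)
                v = 0.0              # reset after the spike
        return out

    # Example: three weighted input spikes; the third drives the
    # neuron across threshold.
    print(simulate_lif([(0.001, 0.6), (0.002, 0.3), (0.004, 0.5)]))  # [0.004]

Computing the exact gradient through the spike discontinuities would
additionally require the backward (adjoint) dynamics derived in the paper; the
sketch above covers only the forward, event-based pass.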

Submission history

From: Christian Pehle

[v1]
Thu, 17 Sep 2020 15:45:00 UTC (1,925 KB)
