Andrew Nere: HPCA Practice Talk: Bridging the Semantic Gap: Emulating Biological Neuronal Behaviors with Simple Digital Neurons
The advent of non-von Neumann computational models,
specifically neuromorphic architectures, has engendered a
new class of challenges for computer architects. On the
one hand, each neuron-like computational element must
consume minimal power and area to enable scaling up to
biological scales of billions of neurons; this rules out direct
support for complex and expensive features such as floating-point
arithmetic and transcendental functions. On the other hand, to
fully benefit from cortical properties and operations,
neuromorphic architectures must support complex non-linear
neuronal behaviors. This semantic gap between simple,
power-efficient processing elements and complex
neuronal behaviors has rekindled a RISC vs. CISC-like debate
within the neuromorphic hardware design community.
In this paper, we address the aforementioned semantic
gap for a recently described digital neuromorphic architecture
that uses simple Linear-Leak Integrate-and-Fire (LLIF)
spiking neurons as its processing primitives. We show
that despite the simplicity of LLIF primitives, a broad class
of complex neuronal behaviors can be emulated by composing
assemblies of such primitives with low area and power
overheads. Furthermore, we demonstrate that, although the LLIF
primitives lack built-in mechanisms for synaptic plasticity,
two well-known neural learning rules, spike-timing-dependent
plasticity and Hebbian learning, can be emulated via assemblies
of LLIF primitives. By bridging the semantic gap for one such
system, we enable neuromorphic system developers in general to
keep their hardware designs simple and power-efficient while
still enjoying the benefits of the complex neuronal behaviors
essential for robust and accurate cortical simulation.
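
The abstract does not spell out the LLIF update rule or the learning-rule emulation, but the conventional linear-leak integrate-and-fire model and a standard pair-based STDP rule give a feel for the primitives involved. The Python sketch below is illustrative only: the class, function, parameter names, and constants are assumptions for this example and are not taken from the paper or its hardware target.

```python
import math

class LLIFNeuron:
    """Minimal sketch of a linear-leak integrate-and-fire (LLIF) neuron.

    Parameter names and values are illustrative, not from the paper.
    """

    def __init__(self, threshold=10, leak=1, v_reset=0):
        self.threshold = threshold  # firing threshold
        self.leak = leak            # constant (linear) leak per tick
        self.v_reset = v_reset      # membrane potential after a spike
        self.v = 0                  # current membrane potential

    def step(self, weighted_inputs):
        """Advance one discrete time step; return True if the neuron spikes."""
        # Integrate weighted input spikes, then apply the linear leak.
        self.v += sum(weighted_inputs)
        self.v -= self.leak
        if self.v >= self.threshold:
            self.v = self.v_reset
            return True
        return False


def stdp_update(weight, t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP sketch: potentiate when the presynaptic spike precedes
    the postsynaptic spike, depress otherwise, with an exponential dependence
    on the spike-time difference."""
    dt = t_post - t_pre
    if dt > 0:
        return weight + a_plus * math.exp(-dt / tau)
    return weight - a_minus * math.exp(dt / tau)


if __name__ == "__main__":
    neuron = LLIFNeuron(threshold=5, leak=1)
    spike_train = [[2, 1], [3, 0], [0, 4], [1, 1]]  # weighted inputs per tick
    for t, inputs in enumerate(spike_train):
        if neuron.step(inputs):
            print(f"spike at tick {t}")
```

In the paper's setting, behaviors like this STDP rule would not be computed directly in the neuron hardware; the point of the abstract is that they can be emulated by composing assemblies of the simple LLIF primitives themselves.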
